118 related articles for article (PubMed ID: 38090874)
1. Universality and Approximation Bounds for Echo State Networks With Random Weights.
Li Z; Yang Y
IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38090874
2. Embedding and approximation theorems for echo state networks.
Hart A; Hook J; Dawes J
Neural Netw; 2020 Aug; 128():234-247. PubMed ID: 32447266
3. Nonlinear system modeling with random matrices: echo state networks revisited.
Zhang B; Miller DJ; Wang Y
IEEE Trans Neural Netw Learn Syst; 2012 Jan; 23(1):175-182. PubMed ID: 24808467
4. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
Petersen P; Voigtlaender F
Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431
5. Analysis and design of echo state networks.
Ozturk MC; Xu D; Príncipe JC
Neural Comput; 2007 Jan; 19(1):111-138. PubMed ID: 17134319
6. Neural networks with ReLU powers need less depth.
Cabanilla KIM; Mohammad RZ; Lope JEC
Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509
7. Approximation in shift-invariant spaces with deep ReLU neural networks.
Yang Y; Li Z; Wang Y
Neural Netw; 2022 Sep; 153():269-281. PubMed ID: 35763879
8. Deep ReLU neural networks in high-dimensional approximation.
Dũng D; Nguyen VK
Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126
9. Approximation of smooth functionals using deep ReLU networks.
Song L; Liu Y; Fan J; Zhou DX
Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610
10. Echo state networks are universal.
Grigoryeva L; Ortega JP
Neural Netw; 2018 Dec; 108():495-508. PubMed ID: 30317134
11. On the approximation of functions by tanh neural networks.
De Ryck T; Lanthaler S; Mishra S
Neural Netw; 2021 Nov; 143():732-750. PubMed ID: 34482172
12. Error bounds for approximations with deep ReLU networks.
Yarotsky D
Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334
13. Cluster-Based Input Weight Initialization for Echo State Networks.
Steiner P; Jalalvand A; Birkholz P
IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):7648-7659. PubMed ID: 35120012
14. Fading memory echo state networks are universal.
Gonon L; Ortega JP
Neural Netw; 2021 Jun; 138():10-13. PubMed ID: 33611064
15. Effects of spectral radius and settling time in the performance of echo state networks.
Venayagamoorthy GK; Shishir B
Neural Netw; 2009 Sep; 22(7):861-863. PubMed ID: 19423285
16. Domain-driven models yield better predictions at lower cost than reservoir computers in Lorenz systems.
Pyle R; Jovanovic N; Subramanian D; Palem KV; Patel AB
Philos Trans A Math Phys Eng Sci; 2021 Apr; 379(2194):20200246. PubMed ID: 33583272
17. An associative memory readout for ESNs with applications to dynamical pattern recognition.
Ozturk MC; Principe JC
Neural Netw; 2007 Apr; 20(3):377-390. PubMed ID: 17513087
18. A Function Space Analysis of Finite Neural Networks With Insights From Sampling Theory.
Giryes R
IEEE Trans Pattern Anal Mach Intell; 2023 Jan; 45(1):27-37. PubMed ID: 35230946
19. Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem.
Montanelli H; Yang H
Neural Netw; 2020 Sep; 129():1-6. PubMed ID: 32473577
20. On the approximation by single hidden layer feedforward neural networks with fixed weights.
Guliyev NJ; Ismailov VE
Neural Netw; 2018 Feb; 98():296-304. PubMed ID: 29301110