158 related articles for article (PubMed ID: 32739651)
1. Two-hidden-layer feed-forward networks are universal approximators: A constructive approach.
Paluzo-Hidalgo E; Gonzalez-Diaz R; Gutiérrez-Naranjo MA
Neural Netw; 2020 Nov; 131:29-36. PubMed ID: 32739651
2. Universal approximation using incremental constructive feedforward networks with random hidden nodes.
Huang GB; Chen L; Siew CK
IEEE Trans Neural Netw; 2006 Jul; 17(4):879-892. PubMed ID: 16856652
3. A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks.
Calafiore GC; Gaubert S; Possieri C
IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5603-5612. PubMed ID: 32167912
4. Constructive approximation to multivariate function by decay RBF neural network.
Hou M; Han X
IEEE Trans Neural Netw; 2010 Sep; 21(9):1517-23. PubMed ID: 20693108
5. Neural networks with a continuous squashing function in the output are universal approximators.
Castro JL; Mantas CJ; Benítez JM
Neural Netw; 2000 Jul; 13(6):561-3. PubMed ID: 10987509
6. Approximation of state-space trajectories by locally recurrent globally feed-forward neural networks.
Patan K
Neural Netw; 2008 Jan; 21(1):59-64. PubMed ID: 18158233
7. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Auer P; Burgsteiner H; Maass W
Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524
8. On the approximation by single hidden layer feedforward neural networks with fixed weights.
Guliyev NJ; Ismailov VE
Neural Netw; 2018 Feb; 98:296-304. PubMed ID: 29301110
9. Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu CY; Chen C; Chang CT; Shih LM
Neural Netw; 2013 Sep; 45:144-50. PubMed ID: 23545155
10. Relaxed conditions for radial-basis function networks to be universal approximators.
Liao Y; Fang SC; Nuttle HL
Neural Netw; 2003 Sep; 16(7):1019-28. PubMed ID: 14692636
11. Neural network approximation: Three hidden layers are enough.
Shen Z; Yang H; Zhang S
Neural Netw; 2021 Sep; 141:160-173. PubMed ID: 33906082
12. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.
Aguiar MA; Dias AP; Ferreira F
Chaos; 2017 Jan; 27(1):013103. PubMed ID: 28147492
13. Constructive function-approximation by three-layer artificial neural networks.
Suzuki S
Neural Netw; 1998 Aug; 11(6):1049-1058. PubMed ID: 12662774
14. A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function.
Guliyev NJ; Ismailov VE
Neural Comput; 2016 Jul; 28(7):1289-304. PubMed ID: 27171269
15. Local coupled feedforward neural network.
Sun J
Neural Netw; 2010 Jan; 23(1):108-13. PubMed ID: 19596550
16. Parameterized Convex Universal Approximators for Decision-Making Problems.
Kim J; Kim Y
IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):2448-2459. PubMed ID: 35857729
17. Universal approximation of extreme learning machine with adaptive growth of hidden nodes.
Zhang R; Lan Y; Huang GB; Xu ZB
IEEE Trans Neural Netw Learn Syst; 2012 Feb; 23(2):365-71. PubMed ID: 24808516
18. Optimizing the Simplicial-Map Neural Network Architecture.
Paluzo-Hidalgo E; Gonzalez-Diaz R; Gutiérrez-Naranjo MA; Heras J
J Imaging; 2021 Sep; 7(9). PubMed ID: 34564099
19. On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review.
Laudani A; Lozito GM; Riganti Fulginei F; Salvini A
Comput Intell Neurosci; 2015; 2015:818243. PubMed ID: 26417368
20. The Kolmogorov-Arnold representation theorem revisited.
Schmidt-Hieber J
Neural Netw; 2021 May; 137:119-126. PubMed ID: 33592434