These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.
142 related articles for article (PubMed ID: 39050673)
1. Gradient-free training of recurrent neural networks using random perturbations. Fernández JG, Keemink S, van Gerven M. Front Neurosci. 2024;18:1439155. PubMed ID: 39050673

2. Online Spatio-Temporal Learning in Deep Neural Networks. Bohnstingl T, Wozniak S, Pantazi A, Eleftheriou E. IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):8894-8908. PubMed ID: 35294357

3. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain. Song Q, Wu Y, Soh YC. IEEE Trans Neural Netw. 2008 Nov;19(11):1841-53. PubMed ID: 18990640

4. Corrigendum: Gradient-free training of recurrent neural networks using random perturbations. Fernández JG, Keemink S, van Gerven M. Front Neurosci. 2024;18:1511916. PubMed ID: 39564528

5. Comparing SNNs and RNNs on neuromorphic vision datasets: Similarities and differences. He W, Wu Y, Deng L, Li G, Wang H, Tian Y, Ding W, Wang W, Xie Y. Neural Netw. 2020 Dec;132:108-120. PubMed ID: 32866745

6. Sensitivity - Local index to control chaoticity or gradient globally. Shibata K, Ejima T, Tokumaru Y, Matsuki T. Neural Netw. 2021 Nov;143:436-451. PubMed ID: 34271523

7. Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics. Vlachas PR, Pathak J, Hunt BR, Sapsis TP, Girvan M, Ott E, Koumoutsakos P. Neural Netw. 2020 Jun;126:191-217. PubMed ID: 32248008

8. E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware. Rostami A, Vogginger B, Yan Y, Mayr CG. Front Neurosci. 2022;16:1018006. PubMed ID: 36518534

9. EXODUS: Stable and efficient training of spiking neural networks. Bauer FC, Lenz G, Haghighatshoar S, Sheik S. Front Neurosci. 2023;17:1110444. PubMed ID: 36845419

10. Efficient training of spiking neural networks with temporally-truncated local backpropagation through time. Guo W, Fouda ME, Eltawil AM, Salama KN. Front Neurosci. 2023;17:1047008. PubMed ID: 37090791

11. Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias. Laborieux A, Ernoult M, Scellier B, Bengio Y, Grollier J, Querlioz D. Front Neurosci. 2021;15:633674. PubMed ID: 33679315

12. Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations. Ororbia A, Mali A, Giles CL, Kifer D. IEEE Trans Neural Netw Learn Syst. 2020 Oct;31(10):4267-4278. PubMed ID: 31976910

13. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training. Liu F, Zhao W, Chen Y, Wang Z, Yang T, Jiang L. Front Neurosci. 2021;15:756876. PubMed ID: 34803591

14. Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient. Liang L, Hu X, Deng L, Wu Y, Li G, Ding Y, Li P, Xie Y. IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2569-2583. PubMed ID: 34473634

15. Backpropagation algorithms for a broad class of dynamic networks. De Jesús O, Hagan MT. IEEE Trans Neural Netw. 2007 Jan;18(1):14-27. PubMed ID: 17278458

16. Continual Sequence Modeling With Predictive Coding. Annabi L, Pitti A, Quoy M. Front Neurorobot. 2022;16:845955. PubMed ID: 35686118