220 related articles for PubMed ID 32454478
1. Synapse cell optimization and back-propagation algorithm implementation in a domain wall synapse based crossbar neural network for scalable on-chip learning.
Kaushik D; Sharda J; Bhowmik D
Nanotechnology; 2020 Sep; 31(36):364004. PubMed ID: 32454478
2. Sign backpropagation: An on-chip learning algorithm for analog RRAM neuromorphic computing systems.
Zhang Q; Wu H; Yao P; Zhang W; Gao B; Deng N; Qian H
Neural Netw; 2018 Dec; 108():217-223. PubMed ID: 30216871
3. On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.
Kwon D; Lim S; Bae JH; Lee ST; Kim H; Seo YT; Oh S; Kim J; Yeom K; Park BG; Lee JH
Front Neurosci; 2020; 14():423. PubMed ID: 32733180
4. Threshold learning algorithm for memristive neural network with binary switching behavior.
Youn S; Hwang Y; Kim TH; Kim S; Hwang H; Park J; Kim H
Neural Netw; 2024 Aug; 176():106355. PubMed ID: 38759411
5. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
Miranda E; Suñé J
Materials (Basel); 2020 Feb; 13(4):. PubMed ID: 32093164
6. Novel deep neural network based pattern field classification architectures.
Huang K; Zhang S; Zhang R; Hussain A
Neural Netw; 2020 Jul; 127():82-95. PubMed ID: 32344155
7. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
Shirwaikar RD; Acharya Dinesh U; Makkithaya K; Surulivelrajan M; Srivastava S; Lewis LES
Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253
8. Analysis of the Memristor-Based Crossbar Synapse for Neuromorphic Systems.
Kim B; Jo S; Sun W; Shin H
J Nanosci Nanotechnol; 2019 Oct; 19(10):6703-6709. PubMed ID: 31027014
9. Experimentally validated memristive memory augmented neural network with efficient hashing and similarity search.
Mao R; Wen B; Kazemi A; Zhao Y; Laguna AF; Lin R; Wong N; Niemier M; Hu XS; Sheng X; Graves CE; Strachan JP; Li C
Nat Commun; 2022 Oct; 13(1):6284. PubMed ID: 36271072
10. A hardware efficient cascadable chip set for ANN's with on-chip backpropagation.
Lehmann T
Int J Neural Syst; 1993 Dec; 4(4):351-8. PubMed ID: 8049798
11. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization.
Kulkarni SR; Rajendran B
Neural Netw; 2018 Jul; 103():118-127. PubMed ID: 29674234
12. Online Supervised Learning for Hardware-Based Multilayer Spiking Neural Networks Through the Modulation of Weight-Dependent Spike-Timing-Dependent Plasticity.
Zheng N; Mazumder P
IEEE Trans Neural Netw Learn Syst; 2018 Sep; 29(9):4287-4302. PubMed ID: 29990088
13. Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu CY; Chen C; Chang CT; Shih LM
Neural Netw; 2013 Sep; 45():144-50. PubMed ID: 23545155
14. An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
Huynh HT; Won Y; Kim JJ
Int J Neural Syst; 2008 Oct; 18(5):433-41. PubMed ID: 18991365
15. Neural Network Training Acceleration With RRAM-Based Hybrid Synapses.
Choi W; Kwak M; Kim S; Hwang H
Front Neurosci; 2021; 15():690418. PubMed ID: 34248492
16. Novel maximum-margin training algorithms for supervised neural networks.
Ludwig O; Nunes U
IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990
17. Enabling Training of Neural Networks on Noisy Hardware.
Gokmen T
Front Artif Intell; 2021; 4():699148. PubMed ID: 34568813
18. Robust adaptive gradient-descent training algorithm for recurrent neural networks in discrete time domain.
Song Q; Wu Y; Soh YC
IEEE Trans Neural Netw; 2008 Nov; 19(11):1841-53. PubMed ID: 18990640
19. Enhanced regularization for on-chip training using analog and temporary memory weights.
Singhal R; Saraswat V; Deshmukh S; Subramoney S; Somappa L; Baghini MS; Ganguly U
Neural Netw; 2023 Aug; 165():1050-1057. PubMed ID: 37478527
20. Algorithm for Training Neural Networks on Resistive Device Arrays.
Gokmen T; Haensch W
Front Neurosci; 2020; 14():103. PubMed ID: 32174807