5. Distributed associative memory network with memory refreshing loss. Park T, Choi I, Lee M. Neural Netw. 2021 Dec;144:33-48. PubMed ID: 34450445.
6. Continuous Online Sequence Learning with an Unsupervised Neural Network Model. Cui Y, Ahmad S, Hawkins J. Neural Comput. 2016 Nov;28(11):2474-2504. PubMed ID: 27626963.
7. Reservoir Memory Machines as Neural Computers. Paassen B, Schulz A, Stewart TC, Hammer B. IEEE Trans Neural Netw Learn Syst. 2022 Jun;33(6):2575-2585. PubMed ID: 34255637.
8. Compositional memory in attractor neural networks with one-step learning. Davis GP, Katz GE, Gentili RJ, Reggia JA. Neural Netw. 2021 Jun;138:78-97. PubMed ID: 33631609.
9. Human-level control through deep reinforcement learning. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, Hassabis D. Nature. 2015 Feb;518(7540):529-533. PubMed ID: 25719670.
10. Breaking Neural Reasoning Architectures With Metamorphic Relation-Based Adversarial Examples. Chan A, Ma L, Juefei-Xu F, Ong YS, Xie X, Xue M, Liu Y. IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6976-6982. PubMed ID: 33886479.
11. BAM learning of nonlinearly separable tasks by using an asymmetrical output function and reinforcement learning. Chartier S, Boukadoum M, Amiri M. IEEE Trans Neural Netw. 2009 Aug;20(8):1281-1292. PubMed ID: 19596635.
12. Neuroevolution of a Modular Memory-Augmented Neural Network for Deep Memory Problems. Khadka S, Chung JJ, Tumer K. Evol Comput. 2019;27(4):639-664. PubMed ID: 30407876.
13. Enhanced regularization for on-chip training using analog and temporary memory weights. Singhal R, Saraswat V, Deshmukh S, Subramoney S, Somappa L, Baghini MS, Ganguly U. Neural Netw. 2023 Aug;165:1050-1057. PubMed ID: 37478527.
14. Concept learning through deep reinforcement learning with memory-augmented neural networks. Shi J, Xu J, Yao Y, Xu B. Neural Netw. 2019 Feb;110:47-54. PubMed ID: 30496914.
15. The road to chaos by time-asymmetric Hebbian learning in recurrent neural networks. Molter C, Salihoglu U, Bersini H. Neural Comput. 2007 Jan;19(1):80-110. PubMed ID: 17134318.
17. A general framework for adaptive processing of data structures. Frasconi P, Gori M, Sperduti A. IEEE Trans Neural Netw. 1998;9(5):768-786. PubMed ID: 18255765.
18. A hybrid neural network of addressable and content-addressable memory. Ma J. Int J Neural Syst. 2003 Jun;13(3):205-213. PubMed ID: 12884453.
19. Elman backpropagation as reinforcement for simple recurrent networks. Grüning A. Neural Comput. 2007 Nov;19(11):3108-3131. PubMed ID: 17883351.
20. Neural network processing of natural language: II. Towards a unified model of corticostriatal function in learning sentence comprehension and non-linguistic sequencing. Dominey PF, Inui T, Hoen M. Brain Lang. 2009;109(2-3):80-92. PubMed ID: 18835637.