These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.
142 related articles for article (PubMed ID: 11387051)
1. Attractor networks for shape recognition. Amit Y; Mascaro M. Neural Comput. 2001 Jun;13(6):1415-42. PubMed ID: 11387051.
2. Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs. Amit Y; Walker J. Front Comput Neurosci. 2012;6:39. PubMed ID: 22737121.
3. Slow stochastic Hebbian learning of classes of stimuli in a recurrent neural network. Brunel N; Carusi F; Fusi S. Network. 1998 Feb;9(1):123-52. PubMed ID: 9861982.
5. A hybrid learning network for shift, orientation, and scaling invariant pattern recognition. Wang R. Network. 2001 Nov;12(4):493-512. PubMed ID: 11762901.
6. Learning viewpoint-invariant face representations from visual experience in an attractor network. Bartlett MS; Sejnowski TJ. Network. 1998 Aug;9(3):399-417. PubMed ID: 9861998.
7. Pattern storage and processing in attractor networks with short-time synaptic dynamics. Bibitchkov D; Herrmann JM; Geisel T. Network. 2002 Feb;13(1):115-29. PubMed ID: 11873841.
8. Learning associative memories by error backpropagation. Zheng P; Zhang J; Tang W. IEEE Trans Neural Netw. 2011 Mar;22(3):347-55. PubMed ID: 21189234.
9. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Auer P; Burgsteiner H; Maass W. Neural Netw. 2008 Jun;21(5):786-95. PubMed ID: 18249524.
10. Hebbian learning of context in recurrent neural networks. Brunel N. Neural Comput. 1996 Nov;8(8):1677-710. PubMed ID: 8888613.
11. Memory dynamics in attractor networks. Li G; Ramanathan K; Ning N; Shi L; Wen C. Comput Intell Neurosci. 2015;2015:191745. PubMed ID: 25960737.
12. Noise tolerance of attractor and feedforward memory models. Lim S; Goldman MS. Neural Comput. 2012 Feb;24(2):332-90. PubMed ID: 22091664.
13. Convergence of stochastic learning in perceptrons with binary synapses. Senn W; Fusi S. Phys Rev E Stat Nonlin Soft Matter Phys. 2005 Jun;71(6 Pt 1):061907. PubMed ID: 16089765.
14. Neural Classifiers with Limited Connectivity and Recurrent Readouts. Kushnir L; Fusi S. J Neurosci. 2018 Nov;38(46):9900-9924. PubMed ID: 30249794.
15. NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns. Chartier S; Proulx R. IEEE Trans Neural Netw. 2005 Nov;16(6):1393-400. PubMed ID: 16342483.
16. A Gaussian attractor network for memory and recognition with experience-dependent learning. Hu X; Zhang B. Neural Comput. 2010 May;22(5):1333-57. PubMed ID: 20100070.
17. Retrospective and prospective persistent activity induced by Hebbian learning in a recurrent cortical network. Mongillo G; Amit DJ; Brunel N. Eur J Neurosci. 2003 Oct;18(7):2011-24. PubMed ID: 14622234.
19. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems. Giulioni M; Corradi F; Dante V; del Giudice P. Sci Rep. 2015 Oct;5:14730. PubMed ID: 26463272.
20. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks. Alemi A; Baldassi C; Brunel N; Zecchina R. PLoS Comput Biol. 2015 Aug;11(8):e1004439. PubMed ID: 26291608.