These tools will no longer be maintained as of December 31, 2024.
161 related articles for article (PubMed ID: 11110132)
1. The Bayesian evidence scheme for regularizing probability-density estimating neural networks. Husmeier D. Neural Comput; 2000 Nov; 12(11):2685-717. PubMed ID: 11110132
2. Recursive Bayesian recurrent neural networks for time-series modeling. Mirikitani DT; Nikolaev N. IEEE Trans Neural Netw; 2010 Feb; 21(2):262-74. PubMed ID: 20040415
3. Generalized radial basis function networks for classification and novelty detection: self-organization of optimal Bayesian decision. Albrecht S; Busch J; Kloppenburg M; Metze F; Tavan P. Neural Netw; 2000 Dec; 13(10):1075-93. PubMed ID: 11156189
4. Density-driven generalized regression neural networks (DD-GRNN) for function approximation. Goulermas JY; Liatsis P; Zeng XJ; Cook P. IEEE Trans Neural Netw; 2007 Nov; 18(6):1683-96. PubMed ID: 18051185
6. Bayesian Gaussian process classification with the EM-EP algorithm. Kim HC; Ghahramani Z. IEEE Trans Pattern Anal Mach Intell; 2006 Dec; 28(12):1948-59. PubMed ID: 17108369
7. Stochastic complexities of general mixture models in variational Bayesian learning. Watanabe K; Watanabe S. Neural Netw; 2007 Mar; 20(2):210-9. PubMed ID: 16904288
13. Learning Gaussian mixture models with entropy-based criteria. Penalver Benavent A; Escolano Ruiz F; Saez JM. IEEE Trans Neural Netw; 2009 Nov; 20(11):1756-71. PubMed ID: 19770090
14. Stochastic organization of output codes in multiclass learning problems. Utschick W; Weichselberger W. Neural Comput; 2001 May; 13(5):1065-102. PubMed ID: 11359645
15. Neural network models for conditional distribution under Bayesian analysis. Miazhynskaia T; Frühwirth-Schnatter S; Dorffner G. Neural Comput; 2008 Feb; 20(2):504-22. PubMed ID: 18045023
16. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements. Durstewitz D. PLoS Comput Biol; 2017 Jun; 13(6):e1005542. PubMed ID: 28574992
17. Superresolution with compound Markov random fields via the variational EM algorithm. Kanemura A; Maeda S; Ishii S. Neural Netw; 2009 Sep; 22(7):1025-34. PubMed ID: 19157777
18. A new EM-based training algorithm for RBF networks. Lázaro M; Santamaría I; Pantaleón C. Neural Netw; 2003 Jan; 16(1):69-77. PubMed ID: 12576107
19. Networks with trainable amplitude of activation functions. Trentin E. Neural Netw; 2001 May; 14(4-5):471-93. PubMed ID: 11411633