

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

161 related articles for article (PubMed ID: 11110132)

  • 1. The Bayesian evidence scheme for regularizing probability-density estimating neural networks.
    Husmeier D
    Neural Comput; 2000 Nov; 12(11):2685-717. PubMed ID: 11110132

  • 2. Recursive Bayesian recurrent neural networks for time-series modeling.
    Mirikitani DT; Nikolaev N
    IEEE Trans Neural Netw; 2010 Feb; 21(2):262-74. PubMed ID: 20040415

  • 3. Generalized radial basis function networks for classification and novelty detection: self-organization of optimal Bayesian decision.
    Albrecht S; Busch J; Kloppenburg M; Metze F; Tavan P
    Neural Netw; 2000 Dec; 13(10):1075-93. PubMed ID: 11156189

  • 4. Density-driven generalized regression neural networks (DD-GRNN) for function approximation.
    Goulermas JY; Liatsis P; Zeng XJ; Cook P
    IEEE Trans Neural Netw; 2007 Nov; 18(6):1683-96. PubMed ID: 18051185

  • 5. Regularized variational Bayesian learning of echo state networks with delay&sum readout.
    Shutin D; Zechner C; Kulkarni SR; Poor HV
    Neural Comput; 2012 Apr; 24(4):967-95. PubMed ID: 22168555

  • 6. Bayesian Gaussian process classification with the EM-EP algorithm.
    Kim HC; Ghahramani Z
    IEEE Trans Pattern Anal Mach Intell; 2006 Dec; 28(12):1948-59. PubMed ID: 17108369

  • 7. Stochastic complexities of general mixture models in variational Bayesian learning.
    Watanabe K; Watanabe S
    Neural Netw; 2007 Mar; 20(2):210-9. PubMed ID: 16904288

  • 8. Invariance priors for Bayesian feed-forward neural networks.
    Toussaint UV; Gori S; Dose V
    Neural Netw; 2006 Dec; 19(10):1550-7. PubMed ID: 16580175

  • 9. Sparse Bayesian Classification of EEG for Brain-Computer Interface.
    Zhang Y; Zhou G; Jin J; Zhao Q; Wang X; Cichocki A
    IEEE Trans Neural Netw Learn Syst; 2016 Nov; 27(11):2256-2267. PubMed ID: 26415189

  • 10. A Bayesian approach to joint feature selection and classifier design.
    Krishnapuram B; Hartemink AJ; Carin L; Figueiredo MA
    IEEE Trans Pattern Anal Mach Intell; 2004 Sep; 26(9):1105-11. PubMed ID: 15742887

  • 11. Learning mixture models with the regularized latent maximum entropy principle.
    Wang S; Schuurmans D; Peng F; Zhao Y
    IEEE Trans Neural Netw; 2004 Jul; 15(4):903-16. PubMed ID: 15461082

  • 12. Variational learning for switching state-space models.
    Ghahramani Z; Hinton GE
    Neural Comput; 2000 Apr; 12(4):831-64. PubMed ID: 10770834

  • 13. Learning Gaussian mixture models with entropy-based criteria.
    Peñalver Benavent A; Escolano Ruiz F; Sáez JM
    IEEE Trans Neural Netw; 2009 Nov; 20(11):1756-71. PubMed ID: 19770090

  • 14. Stochastic organization of output codes in multiclass learning problems.
    Utschick W; Weichselberger W
    Neural Comput; 2001 May; 13(5):1065-102. PubMed ID: 11359645

  • 15. Neural network models for conditional distribution under Bayesian analysis.
    Miazhynskaia T; Frühwirth-Schnatter S; Dorffner G
    Neural Comput; 2008 Feb; 20(2):504-22. PubMed ID: 18045023

  • 16. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.
    Durstewitz D
    PLoS Comput Biol; 2017 Jun; 13(6):e1005542. PubMed ID: 28574992

  • 17. Superresolution with compound Markov random fields via the variational EM algorithm.
    Kanemura A; Maeda S; Ishii S
    Neural Netw; 2009 Sep; 22(7):1025-34. PubMed ID: 19157777

  • 18. A new EM-based training algorithm for RBF networks.
    Lázaro M; Santamaría I; Pantaleón C
    Neural Netw; 2003 Jan; 16(1):69-77. PubMed ID: 12576107

  • 19. Networks with trainable amplitude of activation functions.
    Trentin E
    Neural Netw; 2001 May; 14(4-5):471-93. PubMed ID: 11411633

  • 20. Nonlinear knowledge-based classification.
    Mangasarian OL; Wild EW
    IEEE Trans Neural Netw; 2008 Oct; 19(10):1826-32. PubMed ID: 18842487
