BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

157 related articles for article (PubMed ID: 22655041); the first 20 are listed below.

  • 1. Transferring learning from external to internal weights in echo-state networks with sparse connectivity.
    Sussillo D; Abbott LF
    PLoS One; 2012; 7(5):e37372. PubMed ID: 22655041

  • 2. A Geometrical Analysis of Global Stability in Trained Feedback Networks.
    Mastrogiuseppe F; Ostojic S
    Neural Comput; 2019 Jun; 31(6):1139-1182. PubMed ID: 30979353

  • 3. Multi-source sequential knowledge regression by using transfer RNN units.
    Xie X; Liu G; Cai Q; Wei P; Qu H
    Neural Netw; 2019 Nov; 119():151-161. PubMed ID: 31446234

  • 4. A novel time series analysis approach for prediction of dialysis in critically ill patients using echo-state networks.
    Verplancke T; Van Looy S; Steurbaut K; Benoit D; De Turck F; De Moor G; Decruyenaere J
    BMC Med Inform Decis Mak; 2010 Jan; 10():4. PubMed ID: 20092639

  • 5. full-FORCE: A target-based method for training recurrent networks.
    DePasquale B; Cueva CJ; Rajan K; Escola GS; Abbott LF
    PLoS One; 2018; 13(2):e0191527. PubMed ID: 29415041

  • 6. Online sequential echo state network with sparse RLS algorithm for time series prediction.
    Yang C; Qiao J; Ahmad Z; Nie K; Wang L
    Neural Netw; 2019 Oct; 118():32-42. PubMed ID: 31228722

  • 7. Local online learning in recurrent networks with random feedback.
    Murray JM
    Elife; 2019 May; 8():. PubMed ID: 31124785

  • 8. Three learning phases for radial-basis-function networks.
    Schwenker F; Kestler HA; Palm G
    Neural Netw; 2001 May; 14(4-5):439-58. PubMed ID: 11411631

  • 9. A generalized LSTM-like training algorithm for second-order recurrent neural networks.
    Monner D; Reggia JA
    Neural Netw; 2012 Jan; 25(1):70-83. PubMed ID: 21803542

  • 10. On the Post Hoc Explainability of Optimized Self-Organizing Reservoir Network for Action Recognition.
    Lee GC; Loo CK
    Sensors (Basel); 2022 Mar; 22(5):. PubMed ID: 35271052

  • 11. Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops.
    Stelzer F; Röhm A; Vicente R; Fischer I; Yanchuk S
    Nat Commun; 2021 Aug; 12(1):5164. PubMed ID: 34453053

  • 12. Deep Sparse Learning for Automatic Modulation Classification Using Recurrent Neural Networks.
    Zang K; Wu W; Luo W
    Sensors (Basel); 2021 Sep; 21(19):. PubMed ID: 34640730

  • 13. Multifeedback-layer neural network.
    Savran A
    IEEE Trans Neural Netw; 2007 Mar; 18(2):373-84. PubMed ID: 17385626

  • 14. Biologically plausible single-layer networks for nonnegative independent component analysis.
    Lipshutz D; Pehlevan C; Chklovskii DB
    Biol Cybern; 2022 Dec; 116(5-6):557-568. PubMed ID: 36070103

  • 15. Nonlinear system modeling with random matrices: echo state networks revisited.
    Zhang B; Miller DJ; Wang Y
    IEEE Trans Neural Netw Learn Syst; 2012 Jan; 23(1):175-82. PubMed ID: 24808467

  • 16. Direct Feedback Alignment With Sparse Connections for Local Learning.
    Crafton B; Parihar A; Gebhardt E; Raychowdhury A
    Front Neurosci; 2019; 13():525. PubMed ID: 31178689

  • 17. SpaRCe: Improved Learning of Reservoir Computing Systems Through Sparse Representations.
    Manneschi L; Lin AC; Vasilaki E
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):824-838. PubMed ID: 34398765

  • 18. Adaptive Global Sliding-Mode Control for Dynamic Systems Using Double Hidden Layer Recurrent Neural Network Structure.
    Chu Y; Fei J; Hou S
    IEEE Trans Neural Netw Learn Syst; 2020 Apr; 31(4):1297-1309. PubMed ID: 31247575

  • 19. Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations.
    Ororbia A; Mali A; Giles CL; Kifer D
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4267-4278. PubMed ID: 31976910

  • 20. Universality of gradient descent neural network training.
    Welper G
    Neural Netw; 2022 Jun; 150():259-273. PubMed ID: 35334438
