BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

168 related articles for the article with PubMed ID 18249962

  • 1. LSTM recurrent networks learn simple context-free and context-sensitive languages.
    Gers FA; Schmidhuber J
    IEEE Trans Neural Netw; 2001; 12(6):1333-40. PubMed ID: 18249962

  • 2. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.
    Zazo R; Lozano-Diez A; Gonzalez-Dominguez J; Toledano DT; Gonzalez-Rodriguez J
    PLoS One; 2016; 11(1):e0146917. PubMed ID: 26824467

  • 3. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures.
    Yu Y; Si X; Hu C; Zhang J
    Neural Comput; 2019 Jul; 31(7):1235-1270. PubMed ID: 31113301

  • 4. Incremental training of first order recurrent neural networks to predict a context-sensitive language.
    Chalup SK; Blair AD
    Neural Netw; 2003 Sep; 16(7):955-72. PubMed ID: 14692631

  • 5. On learning context-free and context-sensitive languages.
    Boden M; Wiles J
    IEEE Trans Neural Netw; 2002; 13(2):491-3. PubMed ID: 18244451

  • 6. Learning nonregular languages: a comparison of simple recurrent networks and LSTM.
    Schmidhuber J; Gers F; Eck D
    Neural Comput; 2002 Sep; 14(9):2039-41. PubMed ID: 12184841

  • 7. Framewise phoneme classification with bidirectional LSTM and other neural network architectures.
    Graves A; Schmidhuber J
    Neural Netw; 2005; 18(5-6):602-10. PubMed ID: 16112549

  • 8. Learning to forget: continual prediction with LSTM.
    Gers FA; Schmidhuber J; Cummins F
    Neural Comput; 2000 Oct; 12(10):2451-71. PubMed ID: 11032042

  • 9. Working Memory Connections for LSTM.
    Landi F; Baraldi L; Cornia M; Cucchiara R
    Neural Netw; 2021 Dec; 144():334-341. PubMed ID: 34547671

  • 10. Explicit Duration Recurrent Networks.
    Yu SZ
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):3120-3130. PubMed ID: 33497341

  • 11. A maze learning comparison of Elman, long short-term memory, and Mona neural networks.
    Portegys TE
    Neural Netw; 2010 Mar; 23(2):306-13. PubMed ID: 19945822

  • 12. A critical review of RNN and LSTM variants in hydrological time series predictions.
    Waqas M; Humphries UW
    MethodsX; 2024 Dec; 13():102946. PubMed ID: 39324077

  • 13. Training recurrent networks by Evolino.
    Schmidhuber J; Wierstra D; Gagliolo M; Gomez F
    Neural Comput; 2007 Mar; 19(3):757-79. PubMed ID: 17298232

  • 14. Achieving Online Regression Performance of LSTMs With Simple RNNs.
    Vural NM; Ilhan F; Yilmaz SF; Ergut S; Kozat SS
    IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7632-7643. PubMed ID: 34138720

  • 15. Subtraction Gates: Another Way to Learn Long-Term Dependencies in Recurrent Neural Networks.
    He T; Mao H; Yi Z
    IEEE Trans Neural Netw Learn Syst; 2022 Apr; 33(4):1740-1751. PubMed ID: 33373305

  • 16. Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets.
    Pérez-Ortiz JA; Gers FA; Eck D; Schmidhuber J
    Neural Netw; 2003 Mar; 16(2):241-50. PubMed ID: 12628609

  • 17. Simple recurrent networks learn context-free and context-sensitive languages by counting.
    Rodriguez P
    Neural Comput; 2001 Sep; 13(9):2093-118. PubMed ID: 11516359

  • 18. SGORNN: Combining scalar gates and orthogonal constraints in recurrent networks.
    Taylor-Melanson W; Ferreira MD; Matwin S
    Neural Netw; 2023 Feb; 159():25-33. PubMed ID: 36525915

  • 19. A New Delay Connection for Long Short-Term Memory Networks.
    Wang J; Zhang L; Chen Y; Yi Z
    Int J Neural Syst; 2018 Aug; 28(6):1750061. PubMed ID: 29382286

  • 20. Neuroevolution of a Modular Memory-Augmented Neural Network for Deep Memory Problems.
    Khadka S; Chung JJ; Tumer K
    Evol Comput; 2019; 27(4):639-664. PubMed ID: 30407876
