These tools are no longer maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine *

203 related articles for article (PubMed ID: 34217007)

  • 21. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 22. Relation Extraction from Clinical Narratives Using Pre-trained Language Models.
    Wei Q; Ji Z; Si Y; Du J; Wang J; Tiryaki F; Wu S; Tao C; Roberts K; Xu H
    AMIA Annu Symp Proc; 2019; 2019():1236-1245. PubMed ID: 32308921

  • 23. The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study.
    Rivera Zavala R; Martinez P
    JMIR Med Inform; 2020 Dec; 8(12):e18953. PubMed ID: 33270027

  • 24. Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques.
    Jaber A; Martínez P
    Methods Inf Med; 2022 Jun; 61(S 01):e28-e34. PubMed ID: 35104909

  • 25. Extraction of Information Related to Drug Safety Surveillance From Electronic Health Record Notes: Joint Modeling of Entities and Relations Using Knowledge-Aware Neural Attentive Models.
    Dandala B; Joopudi V; Tsou CH; Liang JJ; Suryanarayanan P
    JMIR Med Inform; 2020 Jul; 8(7):e18417. PubMed ID: 32459650

  • 26. Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning.
    Prottasha NJ; Sami AA; Kowsher M; Murad SA; Bairagi AK; Masud M; Baz M
    Sensors (Basel); 2022 May; 22(11):. PubMed ID: 35684778

  • 27. BERT2OME: Prediction of 2'-O-Methylation Modifications From RNA Sequence by Transformer Architecture Based on BERT.
    Soylu NN; Sefer E
    IEEE/ACM Trans Comput Biol Bioinform; 2023; 20(3):2177-2189. PubMed ID: 37819796

  • 28. Transfer Learning from BERT to Support Insertion of New Concepts into SNOMED CT.
    Liu H; Perl Y; Geller J
    AMIA Annu Symp Proc; 2019; 2019():1129-1138. PubMed ID: 32308910

  • 29. The Impact of Specialized Corpora for Word Embeddings in Natural Language Understanding.
    Neuraz A; Rance B; Garcelon N; Llanos LC; Burgun A; Rosset S
    Stud Health Technol Inform; 2020 Jun; 270():432-436. PubMed ID: 32570421

  • 30. Using word embedding technique to efficiently represent protein sequences for identifying substrate specificities of transporters.
    Nguyen TT; Le NQ; Ho QT; Phan DV; Ou YY
    Anal Biochem; 2019 Jul; 577():73-81. PubMed ID: 31022378

  • 31. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 32. Comparing deep learning architectures for sentiment analysis on drug reviews.
    Colón-Ruiz C; Segura-Bedmar I
    J Biomed Inform; 2020 Oct; 110():103539. PubMed ID: 32818665

  • 33. Automatic extraction of cancer registry reportable information from free-text pathology reports using multitask convolutional neural networks.
    Alawad M; Gao S; Qiu JX; Yoon HJ; Blair Christian J; Penberthy L; Mumphrey B; Wu XC; Coyle L; Tourassi G
    J Am Med Inform Assoc; 2020 Jan; 27(1):89-98. PubMed ID: 31710668

  • 34. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports.
    Bressem KK; Adams LC; Gaudin RA; Tröltzsch D; Hamm B; Makowski MR; Schüle CY; Vahldiek JL; Niehues SM
    Bioinformatics; 2021 Jan; 36(21):5255-5261. PubMed ID: 32702106

  • 35. Stacked DeBERT: All attention in incomplete data for text classification.
    Cunha Sergio G; Lee M
    Neural Netw; 2021 Apr; 136():87-96. PubMed ID: 33453522

  • 36. iAMP-Attenpred: a novel antimicrobial peptide predictor based on BERT feature extraction method and CNN-BiLSTM-Attention combination model.
    Xing W; Zhang J; Li C; Huo Y; Dong G
    Brief Bioinform; 2023 Nov; 25(1):. PubMed ID: 38055840

  • 37. Generating contextual embeddings for emergency department chief complaints.
    Chang D; Hong WS; Taylor RA
    JAMIA Open; 2020 Jul; 3(2):160-166. PubMed ID: 32734154

  • 38. Understanding spatial language in radiology: Representation framework, annotation, and spatial relation extraction from chest X-ray reports using deep learning.
    Datta S; Si Y; Rodriguez L; Shooshan SE; Demner-Fushman D; Roberts K
    J Biomed Inform; 2020 Aug; 108():103473. PubMed ID: 32562898

  • 39. Extracting Multiple Worries From Breast Cancer Patient Blogs Using Multilabel Classification With the Natural Language Processing Model Bidirectional Encoder Representations From Transformers: Infodemiology Study of Blogs.
    Watanabe T; Yada S; Aramaki E; Yajima H; Kizaki H; Hori S
    JMIR Cancer; 2022 Jun; 8(2):e37840. PubMed ID: 35657664

  • 40. Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study.
    Mitra A; Rawat BPS; McManus DD; Yu H
    JMIR Med Inform; 2021 Jul; 9(7):e27527. PubMed ID: 34255697
