BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

139 related articles for article (PubMed ID: 38875537)

  • 1. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2():e44293. PubMed ID: 38875537

  • 2. An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study.
    Sivarajkumar S; Kelley M; Samolyk-Mazzanti A; Visweswaran S; Wang Y
    JMIR Med Inform; 2024 Apr; 12():e55318. PubMed ID: 38587879

  • 3. A large language model-based generative natural language processing framework fine-tuned on clinical notes accurately extracts headache frequency from electronic health records.
    Chiang CC; Luo M; Dumkrieger G; Trivedi S; Chen YC; Chao CJ; Schwedt TJ; Sarker A; Banerjee I
    Headache; 2024 Apr; 64(4):400-409. PubMed ID: 38525734

  • 4. Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning.
    Mahajan D; Poddar A; Liang JJ; Lin YT; Prager JM; Suryanarayanan P; Raghavan P; Tsou CH
    JMIR Med Inform; 2020 Nov; 8(11):e22508. PubMed ID: 33245284

  • 5. A Large Language Model-Based Generative Natural Language Processing Framework Finetuned on Clinical Notes Accurately Extracts Headache Frequency from Electronic Health Records.
    Chiang CC; Luo M; Dumkrieger G; Trivedi S; Chen YC; Chao CJ; Schwedt TJ; Sarker A; Banerjee I
    medRxiv; 2023 Oct; ():. PubMed ID: 37873417

  • 6. When BERT meets Bilbo: a learning curve analysis of pretrained language model on disease classification.
    Li X; Yuan W; Peng D; Mei Q; Wang Y
    BMC Med Inform Decis Mak; 2022 Apr; 21(Suppl 9):377. PubMed ID: 35382811

  • 7. Extracting comprehensive clinical information for breast cancer using deep learning methods.
    Zhang X; Zhang Y; Zhang Q; Ren Y; Qiu T; Ma J; Sun Q
    Int J Med Inform; 2019 Dec; 132():103985. PubMed ID: 31627032

  • 8. Relation Classification for Bleeding Events From Electronic Health Records Using Deep Learning Systems: An Empirical Study.
    Mitra A; Rawat BPS; McManus DD; Yu H
    JMIR Med Inform; 2021 Jul; 9(7):e27527. PubMed ID: 34255697

  • 9. BioBERT: a pre-trained biomedical language representation model for biomedical text mining.
    Lee J; Yoon W; Kim S; Kim D; Kim S; So CH; Kang J
    Bioinformatics; 2020 Feb; 36(4):1234-1240. PubMed ID: 31501885

  • 10. RadBERT: Adapting Transformer-based Language Models to Radiology.
    Yan A; McAuley J; Lu X; Du J; Chang EY; Gentili A; Hsu CN
    Radiol Artif Intell; 2022 Jul; 4(4):e210258. PubMed ID: 35923376

  • 11. BioBERT and Similar Approaches for Relation Extraction.
    Bhasuran B
    Methods Mol Biol; 2022; 2496():221-235. PubMed ID: 35713867

  • 12. Transformers-sklearn: a toolkit for medical language understanding with transformer-based models.
    Yang F; Wang X; Ma H; Li J
    BMC Med Inform Decis Mak; 2021 Jul; 21(Suppl 2):90. PubMed ID: 34330244

  • 13. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study.
    Li F; Jin Y; Liu W; Rawat BPS; Cai P; Yu H
    JMIR Med Inform; 2019 Sep; 7(3):e14830. PubMed ID: 31516126

  • 14. Oversampling effect in pretraining for bidirectional encoder representations from transformers (BERT) to localize medical BERT and enhance biomedical BERT.
    Wada S; Takeda T; Okada K; Manabe S; Konishi S; Kamohara J; Matsumura Y
    Artif Intell Med; 2024 Jul; 153():102889. PubMed ID: 38728811

  • 15. Few-shot learning for medical text: A review of advances, trends, and opportunities.
    Ge Y; Guo Y; Das S; Al-Garadi MA; Sarker A
    J Biomed Inform; 2023 Aug; 144():104458. PubMed ID: 37488023

  • 16. Evaluation of GPT and BERT-based models on identifying protein-protein interactions in biomedical text.
    Rehana H; Çam NB; Basmaci M; Zheng J; Jemiyo C; He Y; Özgür A; Hur J
    ArXiv; 2023 Dec; ():. PubMed ID: 38764593

  • 17. Large language models for biomedicine: foundations, opportunities, challenges, and best practices.
    Sahoo SS; Plasek JM; Xu H; Uzuner Ö; Cohen T; Yetisgen M; Liu H; Meystre S; Wang Y
    J Am Med Inform Assoc; 2024 Apr; ():. PubMed ID: 38657567

  • 18. A comparison of few-shot and traditional named entity recognition models for medical text.
    Ge Y; Guo Y; Yang YC; Al-Garadi MA; Sarker A
    IEEE Int Conf Healthc Inform; 2022 Jun; 2022():84-89. PubMed ID: 37641590

  • 19. Transformer versus traditional natural language processing: how much data is enough for automated radiology report classification?
    Yang E; Li MD; Raghavan S; Deng F; Lang M; Succi MD; Huang AJ; Kalpathy-Cramer J
    Br J Radiol; 2023 Sep; 96(1149):20220769. PubMed ID: 37162253

  • 20. Disease Concept-Embedding Based on the Self-Supervised Method for Medical Information Extraction from Electronic Health Records and Disease Retrieval: Algorithm Development and Validation Study.
    Chen YP; Lo YH; Lai F; Huang CH
    J Med Internet Res; 2021 Jan; 23(1):e25113. PubMed ID: 33502324
