BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

147 related articles for article (PubMed ID: 38827097)

  • 1. A Study of Biomedical Relation Extraction Using GPT Models.
    Zhang J; Wibert M; Zhou H; Peng X; Chen Q; Keloth VK; Hu Y; Zhang R; Xu H; Raja K
    AMIA Jt Summits Transl Sci Proc; 2024; 2024:391-400. PubMed ID: 38827097

  • 2. An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study.
    Sivarajkumar S; Kelley M; Samolyk-Mazzanti A; Visweswaran S; Wang Y
    JMIR Med Inform; 2024 Apr; 12:e55318. PubMed ID: 38587879

  • 3. Evaluation of GPT and BERT-based models on identifying protein-protein interactions in biomedical text.
    Rehana H; Çam NB; Basmaci M; Zheng J; Jemiyo C; He Y; Özgür A; Hur J
    ArXiv; 2023 Dec. PubMed ID: 38764593

  • 4. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2:e44293. PubMed ID: 38875537

  • 5. Generative large language models are all-purpose text analytics engines: text-to-text learning is all your need.
    Peng C; Yang X; Chen A; Yu Z; Smith KE; Costa AB; Flores MG; Bian J; Wu Y
    J Am Med Inform Assoc; 2024 Apr. PubMed ID: 38630580

  • 6. BioGPT: generative pre-trained transformer for biomedical text generation and mining.
    Luo R; Sun L; Xia Y; Qin T; Zhang S; Poon H; Liu TY
    Brief Bioinform; 2022 Nov; 23(6). PubMed ID: 36156661

  • 7. A large language model-based generative natural language processing framework fine-tuned on clinical notes accurately extracts headache frequency from electronic health records.
    Chiang CC; Luo M; Dumkrieger G; Trivedi S; Chen YC; Chao CJ; Schwedt TJ; Sarker A; Banerjee I
    Headache; 2024 Apr; 64(4):400-409. PubMed ID: 38525734

  • 8. Using Large Language Models to Annotate Complex Cases of Social Determinants of Health in Longitudinal Clinical Records.
    Ralevski A; Taiyab N; Nossal M; Mico L; Piekos SN; Hadlock J
    medRxiv; 2024 Apr. PubMed ID: 38712224

  • 9. An evaluation of GPT models for phenotype concept recognition.
    Groza T; Caufield H; Gration D; Baynam G; Haendel MA; Robinson PN; Mungall CJ; Reese JT
    BMC Med Inform Decis Mak; 2024 Jan; 24(1):30. PubMed ID: 38297371

  • 10. Improving large language models for clinical named entity recognition via prompt engineering.
    Hu Y; Chen Q; Du J; Peng X; Keloth VK; Zuo X; Zhou Y; Li Z; Jiang X; Lu Z; Roberts K; Xu H
    J Am Med Inform Assoc; 2024 Jan. PubMed ID: 38281112

  • 11. Automated Paper Screening for Clinical Reviews Using Large Language Models: Data Analysis Study.
    Guo E; Gupta M; Deng J; Park YJ; Paget M; Naugler C
    J Med Internet Res; 2024 Jan; 26:e48996. PubMed ID: 38214966

  • 12. A Large Language Model-Based Generative Natural Language Processing Framework Finetuned on Clinical Notes Accurately Extracts Headache Frequency from Electronic Health Records.
    Chiang CC; Luo M; Dumkrieger G; Trivedi S; Chen YC; Chao CJ; Schwedt TJ; Sarker A; Banerjee I
    medRxiv; 2023 Oct. PubMed ID: 37873417

  • 13. Can large language models replace humans in systematic reviews? Evaluating GPT-4's efficacy in screening and extracting data from peer-reviewed and grey literature in multiple languages.
    Khraisha Q; Put S; Kappenberg J; Warraitch A; Hadfield K
    Res Synth Methods; 2024 Mar. PubMed ID: 38484744

  • 14. Prompt Tuning in Biomedical Relation Extraction.
    He J; Li F; Li J; Hu X; Nian Y; Xiang Y; Wang J; Wei Q; Li Y; Xu H; Tao C
    J Healthc Inform Res; 2024 Jun; 8(2):206-224. PubMed ID: 38681754

  • 15. Evaluating the ChatGPT family of models for biomedical reasoning and classification.
    Chen S; Li Y; Lu S; Van H; Aerts HJWL; Savova GK; Bitterman DS
    J Am Med Inform Assoc; 2024 Apr; 31(4):940-948. PubMed ID: 38261400

  • 16. Towards Improved Radiological Diagnostics: Investigating the Utility and Limitations of GPT-3.5 Turbo and GPT-4 with Quiz Cases.
    Kikuchi T; Nakao T; Nakamura Y; Hanaoka S; Mori H; Yoshikawa T
    AJNR Am J Neuroradiol; 2024 May. PubMed ID: 38719605

  • 17. Performance analysis of large language models in the domain of legal argument mining.
    Al Zubaer A; Granitzer M; Mitrović J
    Front Artif Intell; 2023; 6:1278796. PubMed ID: 38045763

  • 18. ChIP-GPT: a managed large language model for robust data extraction from biomedical database records.
    Cinquin O
    Brief Bioinform; 2024 Jan; 25(2). PubMed ID: 38314912

  • 19. Chemical-Protein Relation Extraction with Pre-trained Prompt Tuning.
    He J; Li F; Hu X; Li J; Nian Y; Wang J; Xiang Y; Wei Q; Xu H; Tao C
    IEEE Int Conf Healthc Inform; 2022 Jun; 2022:608-609. PubMed ID: 37664001

  • 20. Leveraging GPT-4 for identifying cancer phenotypes in electronic health records: a performance comparison between GPT-4, GPT-3.5-turbo, Flan-T5, Llama-3-8B, and spaCy's rule-based and machine learning-based methods.
    Bhattarai K; Oh IY; Sierra JM; Tang J; Payne PRO; Abrams Z; Lai AM
    JAMIA Open; 2024 Oct; 7(3):ooae060. PubMed ID: 38962662
