BIOMARKERS: Molecular Biopsy of Human Tumors, a resource for Precision Medicine

132 related articles for the article with PubMed ID 38541171 (entries 1-20 shown below).

  • 1. Integrating Retrieval-Augmented Generation with Large Language Models in Nephrology: Advancing Practical Applications.
    Miao J; Thongprayoon C; Suppadungsuk S; Garcia Valencia OA; Cheungpasitporn W
    Medicina (Kaunas); 2024 Mar; 60(3). PubMed ID: 38541171

  • 2. Large language models: a primer and gastroenterology applications.
    Shahab O; El Kurdi B; Shaukat A; Nadkarni G; Soroush A
    Therap Adv Gastroenterol; 2024; 17:17562848241227031. PubMed ID: 38390029

  • 3. Optimizing large language models in digestive disease: strategies and challenges to improve clinical outcomes.
    Giuffrè M; Kresevic S; Pugliese N; You K; Shung DL
    Liver Int; 2024 May. PubMed ID: 38819632

  • 4. GastroBot: a Chinese gastrointestinal disease chatbot based on the retrieval-augmented generation.
    Zhou Q; Liu C; Duan Y; Sun K; Li Y; Kan H; Gu Z; Shu J; Hu J
    Front Med (Lausanne); 2024; 11:1392555. PubMed ID: 38841582

  • 5. Emergency Patient Triage Improvement through a Retrieval-Augmented Generation Enhanced Large-Scale Language Model.
    Yazaki M; Maki S; Furuya T; Inoue K; Nagai K; Nagashima Y; Maruyama J; Toki Y; Kitagawa K; Iwata S; Kitamura T; Gushiken S; Noguchi Y; Inoue M; Shiga Y; Inage K; Orita S; Nakada T; Ohtori S
    Prehosp Emerg Care; 2024 Jul; 1-13. PubMed ID: 38950135

  • 6. ChatENT: Augmented Large Language Model for Expert Knowledge Retrieval in Otolaryngology-Head and Neck Surgery.
    Long C; Subburam D; Lowe K; Dos Santos A; Zhang J; Hwang S; Saduka N; Horev Y; Su T; Côté DWJ; Wright ED
    Otolaryngol Head Neck Surg; 2024 Jun. PubMed ID: 38895862

  • 7. Integrating Large Language Models in Bioinformatics Education for Medical Students: Opportunities and Challenges.
    Kang K; Yang Y; Wu Y; Luo R
    Ann Biomed Eng; 2024 Jun. PubMed ID: 38839663

  • 8. Evaluating and Enhancing Large Language Models' Performance in Domain-specific Medicine: Explainable LLM with DocOA.
    Chen X; Wang L; You M; Liu W; Fu Y; Xu J; Zhang S; Chen G; Li K; Li J
    J Med Internet Res; 2024 Jun. PubMed ID: 38833165

  • 9. Improving medical reasoning through retrieval and self-reflection with retrieval-augmented large language models.
    Jeong M; Sohn J; Sung M; Kang J
    Bioinformatics; 2024 Jun; 40(Supplement_1):i119-i129. PubMed ID: 38940167

  • 10. Large language models and the future of rheumatology: assessing impact and emerging opportunities.
    Mannstadt I; Mehta B
    Curr Opin Rheumatol; 2024 Jan; 36(1):46-51. PubMed ID: 37729050

  • 11. Detecting hallucinations in large language models using semantic entropy.
    Farquhar S; Kossen J; Kuhn L; Gal Y
    Nature; 2024 Jun; 630(8017):625-630. PubMed ID: 38898292

  • 12. KRAGEN: a knowledge graph-enhanced RAG framework for biomedical problem solving using large language models.
    Matsumoto N; Moran J; Choi H; Hernandez ME; Venkatesan M; Wang P; Moore JH
    Bioinformatics; 2024 Jun; 40(6). PubMed ID: 38830083

  • 13. Maximising Large Language Model Utility in Cardiovascular Care: A Practical Guide.
    Nolin-Lapalme A; Theriault-Lauzier P; Corbin D; Tastet O; Sharma A; Hussin JG; Kadoury S; Jiang R; Krahn AD; Gallo R; Avram R
    Can J Cardiol; 2024 May. PubMed ID: 38825181

  • 14. OpenMedLM: prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models.
    Maharjan J; Garikipati A; Singh NP; Cyrus L; Sharma M; Ciobanu M; Barnes G; Thapa R; Mao Q; Das R
    Sci Rep; 2024 Jun; 14(1):14156. PubMed ID: 38898116

  • 15. Applying generative AI with retrieval augmented generation to summarize and extract key clinical information from electronic health records.
    Alkhalaf M; Yu P; Yin M; Deng C
    J Biomed Inform; 2024 Jun; 156:104662. PubMed ID: 38880236

  • 16. Zero-shot learning to extract assessment criteria and medical services from the preventive healthcare guidelines using large language models.
    Luo X; Tahabi FM; Marc T; Haunert LA; Storey S
    J Am Med Inform Assoc; 2024 Jun. PubMed ID: 38900185

  • 17. Vision of the future: large language models in ophthalmology.
    Tailor PD; D'Souza HS; Li H; Starr MR
    Curr Opin Ophthalmol; 2024 May. PubMed ID: 38814572

  • 18. Evaluation and mitigation of the limitations of large language models in clinical decision-making.
    Hager P; Jungmann F; Holland R; Bhagat K; Hubrecht I; Knauer M; Vielhauer J; Makowski M; Braren R; Kaissis G; Rueckert D
    Nat Med; 2024 Jul. PubMed ID: 38965432

  • 19. SPROUT: an Interactive Authoring Tool for Generating Programming Tutorials with the Visualization of Large Language Models.
    Liu Y; Wen Z; Weng L; Woodman O; Yang Y; Chen W
    IEEE Trans Vis Comput Graph; 2024 Jun; PP. PubMed ID: 38875084

  • 20. Crowdsourcing with Enhanced Data Quality Assurance: An Efficient Approach to Mitigate Resource Scarcity Challenges in Training Large Language Models for Healthcare.
    Barai P; Leroy G; Bisht P; Rothman JM; Lee S; Andrews J; Rice SA; Ahmed A
    AMIA Jt Summits Transl Sci Proc; 2024; 2024:75-84. PubMed ID: 38827063
