These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.



249 related articles for article (PubMed ID: 39121174)

  • 1. Evaluating large language models for health-related text classification tasks with public social media data.
    Guo Y; Ovadje A; Al-Garadi MA; Sarker A
    J Am Med Inform Assoc; 2024 Oct; 31(10):2181-2189. PubMed ID: 39121174

  • 2. Exploring Large Language Models for Detecting Online Vaccine Reactions.
    Khademi S; Palmer C; Dimaguila GL; Javed M; Buttery J
    Stud Health Technol Inform; 2024 Sep; 318():30-35. PubMed ID: 39320177

  • 3. Large Language Models Can Enable Inductive Thematic Analysis of a Social Media Corpus in a Single Prompt: Human Validation Study.
    Deiner MS; Honcharov V; Li J; Mackey TK; Porco TC; Sarkar U
    JMIR Infodemiology; 2024 Aug; 4():e59641. PubMed ID: 39207842

  • 4. An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing: Algorithm Development and Validation Study.
    Sivarajkumar S; Kelley M; Samolyk-Mazzanti A; Visweswaran S; Wang Y
    JMIR Med Inform; 2024 Apr; 12():e55318. PubMed ID: 38587879

  • 5. A comparative study of large language model-based zero-shot inference and task-specific supervised classification of breast cancer pathology reports.
    Sushil M; Zack T; Mandair D; Zheng Z; Wali A; Yu YN; Quan Y; Lituiev D; Butte AJ
    J Am Med Inform Assoc; 2024 Oct; 31(10):2315-2327. PubMed ID: 38900207

  • 6. A comprehensive evaluation of large language models on benchmark biomedical text processing tasks.
    Jahan I; Laskar MTR; Peng C; Huang JX
    Comput Biol Med; 2024 Mar; 171():108189. PubMed ID: 38447502

  • 7. Potential of Large Language Models in Health Care: Delphi Study.
    Denecke K; May R; Rivera Romero O
    J Med Internet Res; 2024 May; 26():e52399. PubMed ID: 38739445

  • 8. Few-Shot Learning for Clinical Natural Language Processing Using Siamese Neural Networks: Algorithm Development and Validation Study.
    Oniani D; Chandrasekar P; Sivarajkumar S; Wang Y
    JMIR AI; 2023 May; 2():e44293. PubMed ID: 38875537

  • 9. Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study.
    Majdik ZP; Graham SS; Shiva Edward JC; Rodriguez SN; Karnes MS; Jensen JT; Barbour JB; Rousseau JF
    JMIR AI; 2024 May; 3():e52095. PubMed ID: 38875593

  • 10. Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks.
    Luo L; Ning J; Zhao Y; Wang Z; Ding Z; Chen P; Fu W; Han Q; Xu G; Qiu Y; Pan D; Li J; Li H; Feng W; Tu S; Liu Y; Yang Z; Wang J; Sun Y; Lin H
    J Am Med Inform Assoc; 2024 Sep; 31(9):1865-1874. PubMed ID: 38422367

  • 11. A comparative study of zero-shot inference with large language models and supervised modeling in breast cancer pathology classification.
    Sushil M; Zack T; Mandair D; Zheng Z; Wali A; Yu YN; Quan Y; Butte AJ
    Res Sq; 2024 Feb; ():. PubMed ID: 38405831

  • 12. Use of SNOMED CT in Large Language Models: Scoping Review.
    Chang E; Sung S
    JMIR Med Inform; 2024 Oct; 12():e62924. PubMed ID: 39374057

  • 13. Large language models for biomedicine: foundations, opportunities, challenges, and best practices.
    Sahoo SS; Plasek JM; Xu H; Uzuner Ö; Cohen T; Yetisgen M; Liu H; Meystre S; Wang Y
    J Am Med Inform Assoc; 2024 Sep; 31(9):2114-2124. PubMed ID: 38657567

  • 14. Developing large language models to detect adverse drug events in posts on X.
    Deng Y; Xing Y; Quach J; Chen X; Wu X; Zhang Y; Moureaud C; Yu M; Zhao Y; Wang L; Zhong S
    J Biopharm Stat; 2024 Sep; ():1-12. PubMed ID: 39300965

  • 15. Unlocking the Secrets Behind Advanced Artificial Intelligence Language Models in Deidentifying Chinese-English Mixed Clinical Text: Development and Validation Study.
    Lee YQ; Chen CT; Chen CC; Lee CH; Chen P; Wu CS; Dai HJ
    J Med Internet Res; 2024 Jan; 26():e48443. PubMed ID: 38271060

  • 16. The first step is the hardest: pitfalls of representing and tokenizing temporal data for large language models.
    Spathis D; Kawsar F
    J Am Med Inform Assoc; 2024 Sep; 31(9):2151-2158. PubMed ID: 38950417

  • 17. Generalizable clinical note section identification with large language models.
    Zhou W; Miller TA
    JAMIA Open; 2024 Oct; 7(3):ooae075. PubMed ID: 39139700

  • 18. Advancing entity recognition in biomedicine via instruction tuning of large language models.
    Keloth VK; Hu Y; Xie Q; Peng X; Wang Y; Zheng A; Selek M; Raja K; Wei CH; Jin Q; Lu Z; Chen Q; Xu H
    Bioinformatics; 2024 Mar; 40(4):. PubMed ID: 38514400

  • 19. Comparison of Pretraining Models and Strategies for Health-Related Social Media Text Classification.
    Guo Y; Ge Y; Yang YC; Al-Garadi MA; Sarker A
    Healthcare (Basel); 2022 Aug; 10(8):. PubMed ID: 36011135

  • 20. GPT-4 as an X data annotator: Unraveling its performance on a stance classification task.
    Liyanage CR; Gokani R; Mago V
    PLoS One; 2024; 19(8):e0307741. PubMed ID: 39146280
