PubMed for Handhelds
Journal Abstract Search
287 related items for PubMed ID: 37209880
1. ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training. Deebel NA, Terlecki R. Urology; 2023 Jul; 177:29-33. PubMed ID: 37209880
2. Comparison of Gemini Advanced and ChatGPT 4.0's Performances on the Ophthalmology Resident Ophthalmic Knowledge Assessment Program (OKAP) Examination Review Question Banks. Gill GS, Tsai J, Moxam J, Sanghvi HA, Gupta S. Cureus; 2024 Sep; 16(9):e69612. PubMed ID: 39421095
3. New Artificial Intelligence ChatGPT Performs Poorly on the 2022 Self-assessment Study Program for Urology. Huynh LM, Bonebrake BT, Schultis K, Quach A, Deibert CM. Urol Pract; 2023 Jul; 10(4):409-415. PubMed ID: 37276372
6. Can Artificial Intelligence Pass the American Board of Orthopaedic Surgery Examination? Orthopaedic Residents Versus ChatGPT. Lum ZC. Clin Orthop Relat Res; 2023 Aug; 481(8):1623-1630. PubMed ID: 37220190
8. Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance. Şahin B, Emre Genç Y, Doğan K, Emre Şener T, Şekerci ÇA, Tanıdır Y, Yücel S, Tarcan T, Çam HK. J Endourol; 2024 Aug; 38(8):799-808. PubMed ID: 38815140
16. Comprehensive analysis of the performance of GPT-3.5 and GPT-4 on the American Urological Association self-assessment study program exams from 2012-2023. Sherazi A, Canes D. Can Urol Assoc J; 2023 Dec 21. PubMed ID: 38381942
17. The Accuracy of Artificial Intelligence ChatGPT in Oncology Examination Questions. Chow R, Hasan S, Zheng A, Gao C, Valdes G, Yu F, Chhabra A, Raman S, Choi JI, Lin H, Simone CB. J Am Coll Radiol; 2024 Nov; 21(11):1800-1804. PubMed ID: 39098369
18. Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank. Lee GU, Hong DY, Kim SY, Kim JW, Lee YH, Park SO, Lee KR. Medicine (Baltimore); 2024 Mar; 103(9):e37325. PubMed ID: 38428889
19. The performance of artificial intelligence language models in board-style dental knowledge assessment: A preliminary study on ChatGPT. Danesh A, Pazouki H, Danesh K, Danesh F, Danesh A. J Am Dent Assoc; 2023 Nov; 154(11):970-974. PubMed ID: 37676187
20. Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum. Das D, Kumar N, Longjam LA, Sinha R, Deb Roy A, Mondal H, Gupta P. Cureus; 2023 Mar; 15(3):e36034. PubMed ID: 37056538