206 related articles for article (PubMed ID: 37779171)
1. Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Brin D; Sorin V; Vaid A; Soroush A; Glicksberg BS; Charney AW; Nadkarni G; Klang E. Sci Rep; 2023 Oct; 13(1):16492. PubMed ID: 37779171
2. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment. Gilson A; Safranek CW; Huang T; Socrates V; Chi L; Taylor RA; Chartash D. JMIR Med Educ; 2023 Feb; 9():e45312. PubMed ID: 36753318
3. Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis. Knoedler L; Alfertshofer M; Knoedler S; Hoch CC; Funk PF; Cotofana S; Maheta B; Frank K; Brébant V; Prantl L; Lamby P. JMIR Med Educ; 2024 Jan; 10():e51148. PubMed ID: 38180782
4. Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study. Haddad F; Saade JS. JMIR Med Educ; 2024 Jan; 10():e50842. PubMed ID: 38236632
5. Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study. Flores-Cohaila JA; García-Vicente A; Vizcarra-Jiménez SF; De la Cruz-Galán JP; Gutiérrez-Arratia JD; Quiroga Torres BG; Taype-Rondan A. JMIR Med Educ; 2023 Sep; 9():e48039. PubMed ID: 37768724
7. Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI. Wang H; Wu W; Dou Z; He L; Yang L. Int J Med Inform; 2023 Sep; 177():105173. PubMed ID: 37549499
8. In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions. Knoedler L; Knoedler S; Hoch CC; Prantl L; Frank K; Soiderer L; Cotofana S; Dorafshar AH; Schenck T; Vollbach F; Sofo G; Alfertshofer M. Sci Rep; 2024 Jun; 14(1):13553. PubMed ID: 38866891
9. ChatGPT-4: An assessment of an upgraded artificial intelligence chatbot in the United States Medical Licensing Examination. Mihalache A; Huang RS; Popovic MM; Muni RH. Med Teach; 2024 Mar; 46(3):366-372. PubMed ID: 37839017
10. Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: Systematic Review and Meta-Analysis. Liu M; Okuhara T; Chang X; Shirabe R; Nishiie Y; Okada H; Kiuchi T. J Med Internet Res; 2024 Jul; 26():e60807. PubMed ID: 39052324
11. Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam. Su MC; Lin LE; Lin LH; Chen YC. Int J Nurs Stud; 2024 May; 153():104717. PubMed ID: 38401366
12. Critical Analysis of ChatGPT 4 Omni in USMLE Disciplines, Clinical Clerkships, and Clinical Skills. Bicknell BT; Butler D; Whalen S; Ricks J; Dixon CJ; Clark AB; Spaedy O; Skelton A; Edupuganti N; Dzubinski L; Tate H; Dyess G; Lindeman B; Lehmann LS. JMIR Med Educ; 2024 Sep. PubMed ID: 39276063
13. Performance of ChatGPT-3.5 and GPT-4 in national licensing examinations for medicine, pharmacy, dentistry, and nursing: a systematic review and meta-analysis. Jin HK; Lee HE; Kim E. BMC Med Educ; 2024 Sep; 24(1):1013. PubMed ID: 39285377
14. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study. Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M. JMIR Form Res; 2023 Oct; 7():e48023. PubMed ID: 37831496
15. Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study. Meyer A; Riese J; Streichert T. JMIR Med Educ; 2024 Feb; 10():e50965. PubMed ID: 38329802
16. Assessing ChatGPT 4.0's test performance and clinical diagnostic accuracy on USMLE STEP 2 CK and clinical case reports. Shieh A; Tran B; He G; Kumar M; Freed JA; Majety P. Sci Rep; 2024 Apr; 14(1):9330. PubMed ID: 38654011
17. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study. Yu P; Fang C; Liu X; Fu W; Ling J; Yan Z; Jiang Y; Cao Z; Wu M; Chen Z; Zhu W; Zhang Y; Abudukeremu A; Wang Y; Liu X; Wang J. JMIR Med Educ; 2024 Feb; 10():e48514. PubMed ID: 38335017
18. GPT-4o vs. Human Candidates: Performance Analysis in the Polish Final Dentistry Examination. Jaworski A; Jasiński D; Sławińska B; Błecha Z; Jaworski W; Kruplewicz M; Jasińska N; Sysło O; Latkowska A; Jung M. Cureus; 2024 Sep; 16(9):e68813. PubMed ID: 39371744
19. Success of ChatGPT, an AI language model, in taking the French language version of the European Board of Ophthalmology examination: A novel approach to medical knowledge assessment. Panthier C; Gatinel D. J Fr Ophtalmol; 2023 Sep; 46(7):706-711. PubMed ID: 37537126
20. Enhanced Artificial Intelligence Strategies in Renal Oncology: Iterative Optimization and Comparative Analysis of GPT 3.5 Versus 4.0. Liang R; Zhao A; Peng L; Xu X; Zhong J; Wu F; Yi F; Zhang S; Wu S; Hou J. Ann Surg Oncol; 2024 Jun; 31(6):3887-3893. PubMed ID: 38472675