168 related articles for article (PubMed ID: 39225605)
1. Performance of GPT-4 with Vision on Text- and Image-based ACR Diagnostic Radiology In-Training Examination Questions. Hayden N, Gilbert S, Poisson LM, Griffith B, Klochko C. Radiology. 2024 Sep;312(3):e240153. PubMed ID: 39225605.
2. Evaluation of GPT Large Language Model Performance on RSNA 2023 Case of the Day Questions. Mukherjee P, Hou B, Suri A, Zhuang Y, Parnell C, Lee N, Stroie O, Jain R, Wang KC, Sharma K, Summers RM. Radiology. 2024 Oct;313(1):e240609. PubMed ID: 39352277.
3. ChatGPT's diagnostic performance based on textual vs. visual information compared to radiologists' diagnostic performance in musculoskeletal radiology. Horiuchi D, Tatekawa H, Oura T, Shimono T, Walston SL, Takita H, Matsushita S, Mitsuyama Y, Miki Y, Ueda D. Eur Radiol. 2025 Jan;35(1):506-516. PubMed ID: 38995378.
4. Performance of GPT-4 on the American College of Radiology In-training Examination: Evaluating Accuracy, Model Drift, and Fine-tuning. Payne DL, Purohit K, Borrero WM, Chung K, Hao M, Mpoy M, Jin M, Prasanna P, Hill V. Acad Radiol. 2024 Jul;31(7):3046-3054. PubMed ID: 38653599.
5. Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study. Noda M, Ueno T, Koshu R, Takaso Y, Shimada MD, Saito C, Sugimoto H, Fushiki H, Ito M, Nomura A, Yoshizaki T. JMIR Med Educ. 2024 Mar;10:e57054. PubMed ID: 38546736.
7. Could ChatGPT Pass the UK Radiology Fellowship Examinations? Ariyaratne S, Jenko N, Mark Davies A, Iyengar KP, Botchu R. Acad Radiol. 2024 May;31(5):2178-2182. PubMed ID: 38160089.
8. GPT-4 Turbo with Vision fails to outperform text-only GPT-4 Turbo in the Japan Diagnostic Radiology Board Examination. Hirano Y, Hanaoka S, Nakao T, Miki S, Kikuchi T, Nakamura Y, Nomura Y, Yoshikawa T, Abe O. Jpn J Radiol. 2024 Aug;42(8):918-926. PubMed ID: 38733472.
9. Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: Systematic Review and Meta-Analysis. Liu M, Okuhara T, Chang X, Shirabe R, Nishiie Y, Okada H, Kiuchi T. J Med Internet Res. 2024 Jul;26:e60807. PubMed ID: 39052324.
10. Comparing the Diagnostic Performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and Radiologists in Challenging Neuroradiology Cases. Horiuchi D, Tatekawa H, Oura T, Oue S, Walston SL, Takita H, Matsushita S, Mitsuyama Y, Shimono T, Miki Y, Ueda D. Clin Neuroradiol. 2024 Dec;34(4):779-787. PubMed ID: 38806794.
11. Exploring the Performance of ChatGPT Versions 3.5, 4, and 4 With Vision in the Chilean Medical Licensing Examination: Observational Study. Rojas M, Rojas M, Burgess V, Toro-Pérez J, Salehi S. JMIR Med Educ. 2024 Apr;10:e55048. PubMed ID: 38686550.
12. Evaluation of Reliability, Repeatability, Robustness, and Confidence of GPT-3.5 and GPT-4 on a Radiology Board-style Examination. Krishna S, Bhambra N, Bleakney R, Bhayana R. Radiology. 2024 May;311(2):e232715. PubMed ID: 38771184.
13. Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study. Nakao T, Miki S, Nakamura Y, Kikuchi T, Nomura Y, Hanaoka S, Yoshikawa T, Abe O. JMIR Med Educ. 2024 Mar;10:e54393. PubMed ID: 38470459.
14. Artificial Intelligence in Orthopaedics: Performance of ChatGPT on Text and Image Questions on a Complete AAOS Orthopaedic In-Training Examination (OITE). Hayes DS, Foster BK, Makar G, Manzar S, Ozdag Y, Shultz M, Klena JC, Grandizio LC. J Surg Educ. 2024 Nov;81(11):1645-1649. PubMed ID: 39284250.
15. A Comparison Between GPT-3.5, GPT-4, and GPT-4V: Can the Large Language Model (ChatGPT) Pass the Japanese Board of Orthopaedic Surgery Examination? Nakajima N, Fujimori T, Furuya M, Kanie Y, Imai H, Kita K, Uemura K, Okada S. Cureus. 2024 Mar;16(3):e56402. PubMed ID: 38633935.
16. Influence of Model Evolution and System Roles on ChatGPT's Performance in Chinese Medical Licensing Exams: Comparative Study. Ming S, Guo Q, Cheng W, Lei B. JMIR Med Educ. 2024 Aug;10:e52784. PubMed ID: 39140269.
17. Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations. Almeida LC, Farina EMJM, Kuriki PEA, Abdala N, Kitamura FC. Radiol Artif Intell. 2024 Jan;6(1):e230103. PubMed ID: 38294325.
18. Evaluating GPT-V4 (GPT-4 with Vision) on Detection of Radiologic Findings on Chest Radiographs. Zhou Y, Ong H, Kennedy P, Wu CC, Kazam J, Hentel K, Flanders A, Shih G, Peng Y. Radiology. 2024 May;311(2):e233270. PubMed ID: 38713028.
19. Encouragement vs. liability: How prompt engineering influences ChatGPT-4's radiology exam performance. Nguyen D, MacKenzie A, Kim YH. Clin Imaging. 2024 Nov;115:110276. PubMed ID: 39288636.
20. Performance of ChatGPT on American Board of Surgery In-Training Examination Preparation Questions. Tran CG, Chang J, Sherman SK, De Andrade JP. J Surg Res. 2024 Jul;299:329-335. PubMed ID: 38788470.