149 related articles for PubMed ID 38733472
1. GPT-4 Turbo with Vision fails to outperform text-only GPT-4 Turbo in the Japan Diagnostic Radiology Board Examination.
Hirano Y; Hanaoka S; Nakao T; Miki S; Kikuchi T; Nakamura Y; Nomura Y; Yoshikawa T; Abe O
Jpn J Radiol; 2024 May. PubMed ID: 38733472
2. Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.
Noda M; Ueno T; Koshu R; Takaso Y; Shimada MD; Saito C; Sugimoto H; Fushiki H; Ito M; Nomura A; Yoshizaki T
JMIR Med Educ; 2024 Mar; 10():e57054. PubMed ID: 38546736
3. Performance evaluation of ChatGPT, GPT-4, and Bard on the official board examination of the Japan Radiology Society.
Toyama Y; Harigai A; Abe M; Nagano M; Kawabata M; Seki Y; Takase K
Jpn J Radiol; 2024 Feb; 42(2):201-207. PubMed ID: 37792149
4. Comparing the Diagnostic Performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and Radiologists in Challenging Neuroradiology Cases.
Horiuchi D; Tatekawa H; Oura T; Oue S; Walston SL; Takita H; Matsushita S; Mitsuyama Y; Shimono T; Miki Y; Ueda D
Clin Neuroradiol; 2024 May. PubMed ID: 38806794
5. GPT-4 turbo with vision fails to outperform text-only GPT-4 turbo in the Japan diagnostic radiology board examination: correspondence.
Kleebayoon A; Wiwanitkit V
Jpn J Radiol; 2024 May. PubMed ID: 38771502
6. Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.
Nakao T; Miki S; Nakamura Y; Kikuchi T; Nomura Y; Hanaoka S; Yoshikawa T; Abe O
JMIR Med Educ; 2024 Mar; 10():e54393. PubMed ID: 38470459
7. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study.
Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M
JMIR Form Res; 2023 Oct; 7():e48023. PubMed ID: 37831496
8. Towards Improved Radiological Diagnostics: Investigating the Utility and Limitations of GPT-3.5 Turbo and GPT-4 with Quiz Cases.
Kikuchi T; Nakao T; Nakamura Y; Hanaoka S; Mori H; Yoshikawa T
AJNR Am J Neuroradiol; 2024 May. PubMed ID: 38719605
9. The performance of ChatGPT on orthopaedic in-service training exams: A comparative study of the GPT-3.5 turbo and GPT-4 models in orthopaedic education.
Rizzo MG; Cai N; Constantinescu D
J Orthop; 2024 Apr; 50():70-75. PubMed ID: 38173829
10. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
Massey PA; Montgomery C; Zhang AS
J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415
11. Comparing the Performance of Popular Large Language Models on the National Board of Medical Examiners Sample Questions.
Abbas A; Rehman MS; Rehman SS
Cureus; 2024 Mar; 16(3):e55991. PubMed ID: 38606229
12. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations.
Ali R; Tang OY; Connolly ID; Zadnik Sullivan PL; Shin JH; Fridley JS; Asaad WF; Cielo D; Oyelese AA; Doberstein CE; Gokaslan ZL; Telfeian AE
Neurosurgery; 2023 Dec; 93(6):1353-1365. PubMed ID: 37581444
13. The Rapid Development of Artificial Intelligence: GPT-4's Performance on Orthopedic Surgery Board Questions.
Hofmann HL; Guerra GA; Le JL; Wong AM; Hofmann GH; Mayfield CK; Petrigliano FA; Liu JN
Orthopedics; 2024; 47(2):e85-e89. PubMed ID: 37757748
14. Performance of Progressive Generations of GPT on an Exam Designed for Certifying Physicians as Certified Clinical Densitometrists.
Valdez D; Bunnell A; Lim SY; Sadowski P; Shepherd JA
J Clin Densitom; 2024; 27(2):101480. PubMed ID: 38401238
15. Evaluation of Reliability, Repeatability, Robustness, and Confidence of GPT-3.5 and GPT-4 on a Radiology Board-style Examination.
Krishna S; Bhambra N; Bleakney R; Bhayana R
Radiology; 2024 May; 311(2):e232715. PubMed ID: 38771184
16. A Comparison Between GPT-3.5, GPT-4, and GPT-4V: Can the Large Language Model (ChatGPT) Pass the Japanese Board of Orthopaedic Surgery Examination?
Nakajima N; Fujimori T; Furuya M; Kanie Y; Imai H; Kita K; Uemura K; Okada S
Cureus; 2024 Mar; 16(3):e56402. PubMed ID: 38633935
17. Artificial Intelligence for Anesthesiology Board-Style Examination Questions: Role of Large Language Models.
Khan AA; Yunus R; Sohail M; Rehman TA; Saeed S; Bu Y; Jackson CD; Sharkey A; Mahmood F; Matyal R
J Cardiothorac Vasc Anesth; 2024 May; 38(5):1251-1259. PubMed ID: 38423884
18. Generative pretrained transformer-4, an artificial intelligence text predictive model, has a high capability for passing novel written radiology exam questions.
Sood A; Mansoor N; Memmi C; Lynch M; Lynch J
Int J Comput Assist Radiol Surg; 2024 Apr; 19(4):645-653. PubMed ID: 38381363
19. Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study.
Watari T; Takagi S; Sakaguchi K; Nishizaki Y; Shimizu T; Yamamoto Y; Tokuda Y
JMIR Med Educ; 2023 Dec; 9():e52202. PubMed ID: 38055323
20. Stratified Evaluation of GPT's Question Answering in Surgery Reveals Artificial Intelligence (AI) Knowledge Gaps.
Murphy Lonergan R; Curry J; Dhas K; Simmons BI
Cureus; 2023 Nov; 15(11):e48788. PubMed ID: 38098921