113 related articles for article (PubMed ID: 39389541)
1. ScholarGPT's performance in oral and maxillofacial surgery. Balel Y. J Stomatol Oral Maxillofac Surg; 2024 Oct:102114. PubMed ID: 39389541.
2. Can ChatGPT-4o provide new systematic review ideas to oral and maxillofacial surgeons? Balel Y; Zogo A; Yıldız S; Tanyeri H. J Stomatol Oral Maxillofac Surg; 2024 Oct; 125(5S2):101979. PubMed ID: 39068990.
3. Can ChatGPT be used in oral and maxillofacial surgery? Balel Y. J Stomatol Oral Maxillofac Surg; 2023 Oct; 124(5):101471. PubMed ID: 37061037.
4. Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: Systematic Review and Meta-Analysis. Liu M; Okuhara T; Chang X; Shirabe R; Nishiie Y; Okada H; Kiuchi T. J Med Internet Res; 2024 Jul; 26:e60807. PubMed ID: 39052324.
5. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study. Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M. JMIR Form Res; 2023 Oct; 7:e48023. PubMed ID: 37831496.
6. Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI. Wang H; Wu W; Dou Z; He L; Yang L. Int J Med Inform; 2023 Sep; 177:105173. PubMed ID: 37549499.
7. Artificial Intelligence in Ophthalmology: A Comparative Analysis of GPT-3.5, GPT-4, and Human Expertise in Answering StatPearls Questions. Moshirfar M; Altaf AW; Stoakes IM; Tuttle JJ; Hoopes PC. Cureus; 2023 Jun; 15(6):e40822. PubMed ID: 37485215.
8. Is ChatGPT 'ready' to be a learning tool for medical undergraduates and will it perform equally in different subjects? Comparative study of ChatGPT performance in tutorial and case-based learning questions in physiology and biochemistry. Luke WANV; Seow Chong L; Ban KH; Wong AH; Zhi Xiong C; Shuh Shing L; Taneja R; Samarasekera DD; Yap CT. Med Teach; 2024 Nov; 46(11):1441-1447. PubMed ID: 38295769.
9. Evaluating ChatGPT's Performance in Answering Questions About Allergic Rhinitis and Chronic Rhinosinusitis. Ye F; Zhang H; Luo X; Wu T; Yang Q; Shi Z. Otolaryngol Head Neck Surg; 2024 Aug; 171(2):571-577. PubMed ID: 38796735.
10. Enhanced Artificial Intelligence Strategies in Renal Oncology: Iterative Optimization and Comparative Analysis of GPT 3.5 Versus 4.0. Liang R; Zhao A; Peng L; Xu X; Zhong J; Wu F; Yi F; Zhang S; Wu S; Hou J. Ann Surg Oncol; 2024 Jun; 31(6):3887-3893. PubMed ID: 38472675.
11. ChatGPT's diagnostic performance based on textual vs. visual information compared to radiologists' diagnostic performance in musculoskeletal radiology. Horiuchi D; Tatekawa H; Oura T; Shimono T; Walston SL; Takita H; Matsushita S; Mitsuyama Y; Miki Y; Ueda D. Eur Radiol; 2024 Jul. PubMed ID: 38995378.
13. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study. Yu P; Fang C; Liu X; Fu W; Ling J; Yan Z; Jiang Y; Cao Z; Wu M; Chen Z; Zhu W; Zhang Y; Abudukeremu A; Wang Y; Liu X; Wang J. JMIR Med Educ; 2024 Feb; 10:e48514. PubMed ID: 38335017.
14. A Comparison Between GPT-3.5, GPT-4, and GPT-4V: Can the Large Language Model (ChatGPT) Pass the Japanese Board of Orthopaedic Surgery Examination? Nakajima N; Fujimori T; Furuya M; Kanie Y; Imai H; Kita K; Uemura K; Okada S. Cureus; 2024 Mar; 16(3):e56402. PubMed ID: 38633935.
15. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment. Gilson A; Safranek CW; Huang T; Socrates V; Chi L; Taylor RA; Chartash D. JMIR Med Educ; 2023 Feb; 9:e45312. PubMed ID: 36753318.
16. Evaluating ChatGPT's Ability to Solve Higher-Order Questions on the Competency-Based Medical Education Curriculum in Medical Biochemistry. Ghosh A; Bir A. Cureus; 2023 Apr; 15(4):e37023. PubMed ID: 37143631.
17. Evaluating large language models on a highly-specialized topic, radiation oncology physics. Holmes J; Liu Z; Zhang L; Ding Y; Sio TT; McGee LA; Ashman JB; Li X; Liu T; Shen J; Liu W. Front Oncol; 2023; 13:1219326. PubMed ID: 37529688.
18. Examining the Performance of ChatGPT 3.5 and Microsoft Copilot in Otolaryngology: A Comparative Study with Otolaryngologists' Evaluation. Mayo-Yáñez M; Lechien JR; Maria-Saibene A; Vaira LA; Maniaci A; Chiesa-Estomba CM. Indian J Otolaryngol Head Neck Surg; 2024 Aug; 76(4):3465-3469. PubMed ID: 39130248.
19. Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments. Brin D; Sorin V; Vaid A; Soroush A; Glicksberg BS; Charney AW; Nadkarni G; Klang E. Sci Rep; 2023 Oct; 13(1):16492. PubMed ID: 37779171.
20. Assessing Generative Pretrained Transformers (GPT) in Clinical Decision-Making: Comparative Analysis of GPT-3.5 and GPT-4. Lahat A; Sharif K; Zoabi N; Shneor Patt Y; Sharif Y; Fisher L; Shani U; Arow M; Levin R; Klang E. J Med Internet Res; 2024 Jun; 26:e54571. PubMed ID: 38935937.