521 related articles for article (PubMed ID: 38098921)
1. Stratified Evaluation of GPT's Question Answering in Surgery Reveals Artificial Intelligence (AI) Knowledge Gaps.
Murphy Lonergan R; Curry J; Dhas K; Simmons BI
Cureus; 2023 Nov; 15(11):e48788. PubMed ID: 38098921
2. Evaluating Large Language Models for the National Premedical Exam in India: Comparative Analysis of GPT-3.5, GPT-4, and Bard.
Farhat F; Chaudhry BM; Nadeem M; Sohail SS; Madsen DØ
JMIR Med Educ; 2024 Feb; 10():e51523. PubMed ID: 38381486
3. Performance of Progressive Generations of GPT on an Exam Designed for Certifying Physicians as Certified Clinical Densitometrists.
Valdez D; Bunnell A; Lim SY; Sadowski P; Shepherd JA
J Clin Densitom; 2024; 27(2):101480. PubMed ID: 38401238
4. Learning to Make Rare and Complex Diagnoses With Generative AI Assistance: Qualitative Study of Popular Large Language Models.
Abdullahi T; Singh R; Eickhoff C
JMIR Med Educ; 2024 Feb; 10():e51391. PubMed ID: 38349725
5. Comparing the Performance of Popular Large Language Models on the National Board of Medical Examiners Sample Questions.
Abbas A; Rehman MS; Rehman SS
Cureus; 2024 Mar; 16(3):e55991. PubMed ID: 38606229
6. Artificial Intelligence in Ophthalmology: A Comparative Analysis of GPT-3.5, GPT-4, and Human Expertise in Answering StatPearls Questions.
Moshirfar M; Altaf AW; Stoakes IM; Tuttle JJ; Hoopes PC
Cureus; 2023 Jun; 15(6):e40822. PubMed ID: 37485215
7. Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.
Noda M; Ueno T; Koshu R; Takaso Y; Shimada MD; Saito C; Sugimoto H; Fushiki H; Ito M; Nomura A; Yoshizaki T
JMIR Med Educ; 2024 Mar; 10():e57054. PubMed ID: 38546736
8. Quality of Answers of Generative Large Language Models Versus Peer Users for Interpreting Laboratory Test Results for Lay Patients: Evaluation Study.
He Z; Bhasuran B; Jin Q; Tian S; Hanna K; Shavor C; Arguello LG; Murray P; Lu Z
J Med Internet Res; 2024 Apr; 26():e56655. PubMed ID: 38630520
9. GPT-4 Artificial Intelligence Model Outperforms ChatGPT, Medical Students, and Neurosurgery Residents on Neurosurgery Written Board-Like Questions.
Guerra GA; Hofmann H; Sobhani S; Hofmann G; Gomez D; Soroudi D; Hopkins BS; Dallas J; Pangal DJ; Cheok S; Nguyen VN; Mack WJ; Zada G
World Neurosurg; 2023 Nov; 179():e160-e165. PubMed ID: 37597659
10. Large Language Models for Therapy Recommendations Across 3 Clinical Specialties: Comparative Study.
Wilhelm TI; Roos J; Kaczmarczyk R
J Med Internet Res; 2023 Oct; 25():e49324. PubMed ID: 37902826
11. The performance of large language models in intercollegiate Membership of the Royal College of Surgeons examination.
Chan J; Dong T; Angelini GD
Ann R Coll Surg Engl; 2024 Mar. PubMed ID: 38445611
12. Generative pretrained transformer-4, an artificial intelligence text predictive model, has a high capability for passing novel written radiology exam questions.
Sood A; Mansoor N; Memmi C; Lynch M; Lynch J
Int J Comput Assist Radiol Surg; 2024 Apr; 19(4):645-653. PubMed ID: 38381363
13. Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.
Nakao T; Miki S; Nakamura Y; Kikuchi T; Nomura Y; Hanaoka S; Yoshikawa T; Abe O
JMIR Med Educ; 2024 Mar; 10():e54393. PubMed ID: 38470459
14. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations.
Ali R; Tang OY; Connolly ID; Zadnik Sullivan PL; Shin JH; Fridley JS; Asaad WF; Cielo D; Oyelese AA; Doberstein CE; Gokaslan ZL; Telfeian AE
Neurosurgery; 2023 Dec; 93(6):1353-1365. PubMed ID: 37581444
15. Evaluating the Artificial Intelligence Performance Growth in Ophthalmic Knowledge.
Jiao C; Edupuganti NR; Patel PA; Bui T; Sheth V
Cureus; 2023 Sep; 15(9):e45700. PubMed ID: 37868408
16. The performance of ChatGPT on orthopaedic in-service training exams: A comparative study of the GPT-3.5 turbo and GPT-4 models in orthopaedic education.
Rizzo MG; Cai N; Constantinescu D
J Orthop; 2024 Apr; 50():70-75. PubMed ID: 38173829
17. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study.
Giannakopoulos K; Kavadella A; Aaqel Salim A; Stamatopoulos V; Kaklamanos EG
J Med Internet Res; 2023 Dec; 25():e51580. PubMed ID: 38009003
18. Evaluating the Efficacy of ChatGPT in Navigating the Spanish Medical Residency Entrance Examination (MIR): Promising Horizons for AI in Clinical Medicine.
Guillen-Grima F; Guillen-Aguinaga S; Guillen-Aguinaga L; Alas-Brun R; Onambele L; Ortega W; Montejo R; Aguinaga-Ontoso E; Barach P; Aguinaga-Ontoso I
Clin Pract; 2023 Nov; 13(6):1460-1487. PubMed ID: 37987431
19. Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study.
Rao A; Pang M; Kim J; Kamineni M; Lie W; Prasad AK; Landman A; Dreyer K; Succi MD
J Med Internet Res; 2023 Aug; 25():e48659. PubMed ID: 37606976
20. The Potential of GPT-4 as a Support Tool for Pharmacists: Analytical Study Using the Japanese National Examination for Pharmacists.
Kunitsu Y
JMIR Med Educ; 2023 Oct; 9():e48452. PubMed ID: 37837968