162 related articles for the article with PubMed ID 38633935

  • 1. A Comparison Between GPT-3.5, GPT-4, and GPT-4V: Can the Large Language Model (ChatGPT) Pass the Japanese Board of Orthopaedic Surgery Examination?
    Nakajima N; Fujimori T; Furuya M; Kanie Y; Imai H; Kita K; Uemura K; Okada S
    Cureus; 2024 Mar; 16(3):e56402. PubMed ID: 38633935

  • 2. Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.
    Noda M; Ueno T; Koshu R; Takaso Y; Shimada MD; Saito C; Sugimoto H; Fushiki H; Ito M; Nomura A; Yoshizaki T
    JMIR Med Educ; 2024 Mar; 10():e57054. PubMed ID: 38546736

  • 3. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
    Massey PA; Montgomery C; Zhang AS
    J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415

  • 4. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study.
    Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M
    JMIR Form Res; 2023 Oct; 7():e48023. PubMed ID: 37831496

  • 5. Performance of the Large Language Model ChatGPT on the National Nurse Examinations in Japan: Evaluation Study.
    Taira K; Itaya T; Hanada A
    JMIR Nurs; 2023 Jun; 6():e47305. PubMed ID: 37368470

  • 6. Exploring the Performance of ChatGPT Versions 3.5, 4, and 4 With Vision in the Chilean Medical Licensing Examination: Observational Study.
    Rojas M; Rojas M; Burgess V; Toro-Pérez J; Salehi S
    JMIR Med Educ; 2024 Apr; 10():e55048. PubMed ID: 38686550

  • 7. Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.
    Nakao T; Miki S; Nakamura Y; Kikuchi T; Nomura Y; Hanaoka S; Yoshikawa T; Abe O
    JMIR Med Educ; 2024 Mar; 10():e54393. PubMed ID: 38470459

  • 8. Can Artificial Intelligence Pass the American Board of Orthopaedic Surgery Examination? Orthopaedic Residents Versus ChatGPT.
    Lum ZC
    Clin Orthop Relat Res; 2023 Aug; 481(8):1623-1630. PubMed ID: 37220190

  • 9. Performance of Progressive Generations of GPT on an Exam Designed for Certifying Physicians as Certified Clinical Densitometrists.
    Valdez D; Bunnell A; Lim SY; Sadowski P; Shepherd JA
    J Clin Densitom; 2024; 27(2):101480. PubMed ID: 38401238

  • 10. Evaluating ChatGPT Performance on the Orthopaedic In-Training Examination.
    Kung JE; Marshall C; Gauthier C; Gonzalez TA; Jackson JB
    JB JS Open Access; 2023; 8(3):. PubMed ID: 37693092

  • 11. Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study.
    Flores-Cohaila JA; García-Vicente A; Vizcarra-Jiménez SF; De la Cruz-Galán JP; Gutiérrez-Arratia JD; Quiroga Torres BG; Taype-Rondan A
    JMIR Med Educ; 2023 Sep; 9():e48039. PubMed ID: 37768724

  • 12. Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study.
    Watari T; Takagi S; Sakaguchi K; Nishizaki Y; Shimizu T; Yamamoto Y; Tokuda Y
    JMIR Med Educ; 2023 Dec; 9():e52202. PubMed ID: 38055323

  • 13. Assessing the Performance of GPT-3.5 and GPT-4 on the 2023 Japanese Nursing Examination.
    Kaneda Y; Takahashi R; Kaneda U; Akashima S; Okita H; Misaki S; Yamashiro A; Ozaki A; Tanimoto T
    Cureus; 2023 Aug; 15(8):e42924. PubMed ID: 37667724

  • 14. Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study.
    Haddad F; Saade JS
    JMIR Med Educ; 2024 Jan; 10():e50842. PubMed ID: 38236632

  • 15. Comparing the Diagnostic Performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and Radiologists in Challenging Neuroradiology Cases.
    Horiuchi D; Tatekawa H; Oura T; Oue S; Walston SL; Takita H; Matsushita S; Mitsuyama Y; Shimono T; Miki Y; Ueda D
    Clin Neuroradiol; 2024 May; ():. PubMed ID: 38806794

  • 16. The Rapid Development of Artificial Intelligence: GPT-4's Performance on Orthopedic Surgery Board Questions.
    Hofmann HL; Guerra GA; Le JL; Wong AM; Hofmann GH; Mayfield CK; Petrigliano FA; Liu JN
    Orthopedics; 2024; 47(2):e85-e89. PubMed ID: 37757748

  • 17. Could ChatGPT Pass the UK Radiology Fellowship Examinations?
    Ariyaratne S; Jenko N; Mark Davies A; Iyengar KP; Botchu R
    Acad Radiol; 2024 May; 31(5):2178-2182. PubMed ID: 38160089

  • 18. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations.
    Ali R; Tang OY; Connolly ID; Zadnik Sullivan PL; Shin JH; Fridley JS; Asaad WF; Cielo D; Oyelese AA; Doberstein CE; Gokaslan ZL; Telfeian AE
    Neurosurgery; 2023 Dec; 93(6):1353-1365. PubMed ID: 37581444

  • 19. Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study.
    Meyer A; Riese J; Streichert T
    JMIR Med Educ; 2024 Feb; 10():e50965. PubMed ID: 38329802

  • 20. Artificial Intelligence in Childcare: Assessing the Performance and Acceptance of ChatGPT Responses.
    Kaneda Y; Namba M; Kaneda U; Tanimoto T
    Cureus; 2023 Aug; 15(8):e44484. PubMed ID: 37791148
