
123 related articles for PubMed ID 38648547

  • 1. GPT-4/4V's performance on the Japanese National Medical Licensing Examination.
    Kawahara T; Sumi Y
    Med Teach; 2024 Apr; ():1-8. PubMed ID: 38648547

  • 2. Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study.
    Nakao T; Miki S; Nakamura Y; Kikuchi T; Nomura Y; Hanaoka S; Yoshikawa T; Abe O
    JMIR Med Educ; 2024 Mar; 10():e54393. PubMed ID: 38470459

  • 3. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study.
    Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M
    JMIR Form Res; 2023 Oct; 7():e48023. PubMed ID: 37831496

  • 4. Performance of Generative Pretrained Transformer on the National Medical Licensing Examination in Japan.
    Tanaka Y; Nakata T; Aiga K; Etani T; Muramatsu R; Katagiri S; Kawai H; Higashino F; Enomoto M; Noda M; Kometani M; Takamura M; Yoneda T; Kakizaki H; Nomura A
    PLOS Digit Health; 2024 Jan; 3(1):e0000433. PubMed ID: 38261580

  • 5. A Comparison Between GPT-3.5, GPT-4, and GPT-4V: Can the Large Language Model (ChatGPT) Pass the Japanese Board of Orthopaedic Surgery Examination?
    Nakajima N; Fujimori T; Furuya M; Kanie Y; Imai H; Kita K; Uemura K; Okada S
    Cureus; 2024 Mar; 16(3):e56402. PubMed ID: 38633935

  • 6. Performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination: Comparison Study.
    Takagi S; Watari T; Erabi A; Sakaguchi K
    JMIR Med Educ; 2023 Jun; 9():e48002. PubMed ID: 37384388

  • 7. Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study.
    Flores-Cohaila JA; García-Vicente A; Vizcarra-Jiménez SF; De la Cruz-Galán JP; Gutiérrez-Arratia JD; Quiroga Torres BG; Taype-Rondan A
    JMIR Med Educ; 2023 Sep; 9():e48039. PubMed ID: 37768724

  • 8. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
    Massey PA; Montgomery C; Zhang AS
    J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415

  • 9. Performance of GPT-4V in Answering the Japanese Otolaryngology Board Certification Examination Questions: Evaluation Study.
    Noda M; Ueno T; Koshu R; Takaso Y; Shimada MD; Saito C; Sugimoto H; Fushiki H; Ito M; Nomura A; Yoshizaki T
    JMIR Med Educ; 2024 Mar; 10():e57054. PubMed ID: 38546736

  • 10. Integrating Text and Image Analysis: Exploring GPT-4V's Capabilities in Advanced Radiological Applications Across Subspecialties.
    Busch F; Han T; Makowski MR; Truhn D; Bressem KK; Adams L
    J Med Internet Res; 2024 May; 26():e54948. PubMed ID: 38691404

  • 11. Assessing the Performance of GPT-3.5 and GPT-4 on the 2023 Japanese Nursing Examination.
    Kaneda Y; Takahashi R; Kaneda U; Akashima S; Okita H; Misaki S; Yamashiro A; Ozaki A; Tanimoto T
    Cureus; 2023 Aug; 15(8):e42924. PubMed ID: 37667724

  • 12. Performance of the Large Language Model ChatGPT on the National Nurse Examinations in Japan: Evaluation Study.
    Taira K; Itaya T; Hanada A
    JMIR Nurs; 2023 Jun; 6():e47305. PubMed ID: 37368470

  • 13. The Potential of GPT-4 as a Support Tool for Pharmacists: Analytical Study Using the Japanese National Examination for Pharmacists.
    Kunitsu Y
    JMIR Med Educ; 2023 Oct; 9():e48452. PubMed ID: 37837968

  • 14. Success of ChatGPT, an AI language model, in taking the French language version of the European Board of Ophthalmology examination: A novel approach to medical knowledge assessment.
    Panthier C; Gatinel D
    J Fr Ophtalmol; 2023 Sep; 46(7):706-711. PubMed ID: 37537126

  • 15. Hidden Flaws Behind Expert-Level Accuracy of Multimodal GPT-4 Vision in Medicine.
    Jin Q; Chen F; Zhou Y; Xu Z; Cheung JM; Chen R; Summers RM; Rousseau JF; Ni P; Landsman MJ; Baxter SL; Al'Aref SJ; Li Y; Chen A; Brejt JA; Chiang MF; Peng Y; Lu Z
    ArXiv; 2024 Apr; ():. PubMed ID: 38410646

  • 16. Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI.
    Wang H; Wu W; Dou Z; He L; Yang L
    Int J Med Inform; 2023 Sep; 177():105173. PubMed ID: 37549499

  • 17. Evaluating ChatGPT-4's Diagnostic Accuracy: Impact of Visual Data Integration.
    Hirosawa T; Harada Y; Tokumasu K; Ito T; Suzuki T; Shimizu T
    JMIR Med Inform; 2024 Apr; 12():e55627. PubMed ID: 38592758

  • 18. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment.
    Gilson A; Safranek CW; Huang T; Socrates V; Chi L; Taylor RA; Chartash D
    JMIR Med Educ; 2023 Feb; 9():e45312. PubMed ID: 36753318

  • 19. Artificial Intelligence in Childcare: Assessing the Performance and Acceptance of ChatGPT Responses.
    Kaneda Y; Namba M; Kaneda U; Tanimoto T
    Cureus; 2023 Aug; 15(8):e44484. PubMed ID: 37791148

  • 20. Comparison of the Performance of GPT-3.5 and GPT-4 With That of Medical Students on the Written German Medical Licensing Examination: Observational Study.
    Meyer A; Riese J; Streichert T
    JMIR Med Educ; 2024 Feb; 10():e50965. PubMed ID: 38329802
