These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

130 related articles for article (PubMed ID: 37668790)

  • 1. Performance of ChatGPT in Israeli Hebrew OBGYN national residency examinations.
    Cohen A; Alter R; Lessans N; Meyer R; Brezinov Y; Levin G
    Arch Gynecol Obstet; 2023 Dec; 308(6):1797-1802. PubMed ID: 37668790

  • 2. Performance of ChatGPT in French language Parcours d'Accès Spécifique Santé test and in OBGYN.
    Guigue PA; Meyer R; Thivolle-Lioux G; Brezinov Y; Levin G
    Int J Gynaecol Obstet; 2024 Mar; 164(3):959-963. PubMed ID: 37655838

  • 3. Impact of Question Bank Use for In-Training Examination Preparation by OBGYN Residents - A Multicenter Study.
    Green I; Weaver A; Kircher S; Levy G; Michael Brady R; Flicker AB; Gala RB; Peterson J; Decesare J; Breitkopf D
    J Surg Educ; 2022; 79(3):775-782. PubMed ID: 35086789

  • 4. ChatGPT Is Equivalent to First-Year Plastic Surgery Residents: Evaluation of ChatGPT on the Plastic Surgery In-Service Examination.
    Humar P; Asaad M; Bengur FB; Nguyen V
    Aesthet Surg J; 2023 Nov; 43(12):NP1085-NP1089. PubMed ID: 37140001

  • 5. Can Artificial Intelligence Pass the American Board of Orthopaedic Surgery Examination? Orthopaedic Residents Versus ChatGPT.
    Lum ZC
    Clin Orthop Relat Res; 2023 Aug; 481(8):1623-1630. PubMed ID: 37220190

  • 6. Performance Comparison of ChatGPT-4 and Japanese Medical Residents in the General Medicine In-Training Examination: Comparison Study.
    Watari T; Takagi S; Sakaguchi K; Nishizaki Y; Shimizu T; Yamamoto Y; Tokuda Y
    JMIR Med Educ; 2023 Dec; 9():e52202. PubMed ID: 38055323

  • 7. Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment.
    Mihalache A; Popovic MM; Muni RH
    JAMA Ophthalmol; 2023 Jun; 141(6):589-597. PubMed ID: 37103928

  • 8. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
    Massey PA; Montgomery C; Zhang AS
    J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415

  • 9. Performance of the Large Language Model ChatGPT on the National Nurse Examinations in Japan: Evaluation Study.
    Taira K; Itaya T; Hanada A
    JMIR Nurs; 2023 Jun; 6():e47305. PubMed ID: 37368470

  • 10. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.
    Yu P; Fang C; Liu X; Fu W; Ling J; Yan Z; Jiang Y; Cao Z; Wu M; Chen Z; Zhu W; Zhang Y; Abudukeremu A; Wang Y; Liu X; Wang J
    JMIR Med Educ; 2024 Feb; 10():e48514. PubMed ID: 38335017

  • 11. ChatGPT-4: An assessment of an upgraded artificial intelligence chatbot in the United States Medical Licensing Examination.
    Mihalache A; Huang RS; Popovic MM; Muni RH
    Med Teach; 2024 Mar; 46(3):366-372. PubMed ID: 37839017

  • 12. Exploring the use of ChatGPT in OBGYN: a bibliometric analysis of the first ChatGPT-related publications.
    Levin G; Brezinov Y; Meyer R
    Arch Gynecol Obstet; 2023 Dec; 308(6):1785-1789. PubMed ID: 37222839

  • 13. Assessment of Resident and AI Chatbot Performance on the University of Toronto Family Medicine Residency Progress Test: Comparative Study.
    Huang RS; Lu KJQ; Meaney C; Kemppainen J; Punnett A; Leung FH
    JMIR Med Educ; 2023 Sep; 9():e50514. PubMed ID: 37725411

  • 14. The performance of artificial intelligence language models in board-style dental knowledge assessment: A preliminary study on ChatGPT.
    Danesh A; Pazouki H; Danesh K; Danesh F; Danesh A
    J Am Dent Assoc; 2023 Nov; 154(11):970-974. PubMed ID: 37676187

  • 15. Progression of an Artificial Intelligence Chatbot (ChatGPT) for Pediatric Cardiology Educational Knowledge Assessment.
    Gritti MN; AlTurki H; Farid P; Morgan CT
    Pediatr Cardiol; 2024 Feb; 45(2):309-313. PubMed ID: 38170274

  • 16. Performance of ChatGPT-4 in answering questions from the Brazilian National Examination for Medical Degree Revalidation.
    Gobira M; Nakayama LF; Moreira R; Andrade E; Regatieri CVS; Belfort R
    Rev Assoc Med Bras (1992); 2023; 69(10):e20230848. PubMed ID: 37792871

  • 17. It takes one to know one-Machine learning for identifying OBGYN abstracts written by ChatGPT.
    Levin G; Meyer R; Guigue PA; Brezinov Y
    Int J Gynaecol Obstet; 2024 Jun; 165(3):1257-1260. PubMed ID: 38234125

  • 18. ChatGPT in Iranian medical licensing examination: evaluating the diagnostic accuracy and decision-making capabilities of an AI-based model.
    Ebrahimian M; Behnam B; Ghayebi N; Sobhrakhshankhah E
    BMJ Health Care Inform; 2023 Dec; 30(1):. PubMed ID: 38081765

  • 19. Could ChatGPT Pass the UK Radiology Fellowship Examinations?
    Ariyaratne S; Jenko N; Mark Davies A; Iyengar KP; Botchu R
    Acad Radiol; 2024 May; 31(5):2178-2182. PubMed ID: 38160089

  • 20. Accuracy of ChatGPT on Medical Questions in the National Medical Licensing Examination in Japan: Evaluation Study.
    Yanagita Y; Yokokawa D; Uchida S; Tawara J; Ikusaka M
    JMIR Form Res; 2023 Oct; 7():e48023. PubMed ID: 37831496
