230 related articles for the source article (PubMed ID: 37209880; entry 1 below)

  • 1. ChatGPT Performance on the American Urological Association Self-assessment Study Program and the Potential Influence of Artificial Intelligence in Urologic Training.
    Deebel NA; Terlecki R
    Urology; 2023 Jul; 177():29-33. PubMed ID: 37209880

  • 2. New Artificial Intelligence ChatGPT Performs Poorly on the 2022 Self-assessment Study Program for Urology.
    Huynh LM; Bonebrake BT; Schultis K; Quach A; Deibert CM
    Urol Pract; 2023 Jul; 10(4):409-415. PubMed ID: 37276372

  • 3. Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment.
    Mihalache A; Popovic MM; Muni RH
    JAMA Ophthalmol; 2023 Jun; 141(6):589-597. PubMed ID: 37103928

  • 4. Can Artificial Intelligence Pass the American Board of Orthopaedic Surgery Examination? Orthopaedic Residents Versus ChatGPT.
    Lum ZC
    Clin Orthop Relat Res; 2023 Aug; 481(8):1623-1630. PubMed ID: 37220190

  • 5. Evaluating capabilities of large language models: Performance of GPT-4 on surgical knowledge assessments.
    Beaulieu-Jones BR; Berrigan MT; Shah S; Marwaha JS; Lai SL; Brat GA
    Surgery; 2024 Apr; 175(4):936-942. PubMed ID: 38246839

  • 6. Evaluating the performance of ChatGPT in answering questions related to urolithiasis.
    Cakir H; Caglar U; Yildiz O; Meric A; Ayranci A; Ozgor F
    Int Urol Nephrol; 2024 Jan; 56(1):17-21. PubMed ID: 37658948

  • 7. Performance of ChatGPT on American Board of Surgery In-Training Examination Preparation Questions.
    Tran CG; Chang J; Sherman SK; De Andrade JP
    J Surg Res; 2024 May; 299():329-335. PubMed ID: 38788470

  • 8. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment.
    Gilson A; Safranek CW; Huang T; Socrates V; Chi L; Taylor RA; Chartash D
    JMIR Med Educ; 2023 Feb; 9():e45312. PubMed ID: 36753318

  • 9. Conformity of ChatGPT recommendations with the AUA/SUFU guideline on postprostatectomy urinary incontinence.
    Pinto VBP; de Azevedo MF; Wroclawski ML; Gentile G; Jesus VLM; de Bessa Junior J; Nahas WC; Sacomani CAR; Sandhu JS; Gomes CM
    Neurourol Urodyn; 2024 Apr; 43(4):935-941. PubMed ID: 38451040

  • 10. Could ChatGPT Pass the UK Radiology Fellowship Examinations?
    Ariyaratne S; Jenko N; Mark Davies A; Iyengar KP; Botchu R
    Acad Radiol; 2024 May; 31(5):2178-2182. PubMed ID: 38160089

  • 11. How does artificial intelligence master urological board examinations? A comparative analysis of different Large Language Models' accuracy and reliability in the 2022 In-Service Assessment of the European Board of Urology.
    Kollitsch L; Eredics K; Marszalek M; Rauchenwald M; Brookman-May SD; Burger M; Körner-Riffard K; May M
    World J Urol; 2024 Jan; 42(1):20. PubMed ID: 38197996

  • 12. Comprehensive analysis of the performance of GPT-3.5 and GPT-4 on the American Urological Association self-assessment study program exams from 2012-2023.
    Sherazi A; Canes D
    Can Urol Assoc J; 2023 Dec. PubMed ID: 38381942

  • 13. Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance.
    Şahin B; Emre Genç Y; Doğan K; Emre Şener T; Şekerci ÇA; Tanıdır Y; Yücel S; Tarcan T; Çam HK
    J Endourol; 2024 May. PubMed ID: 38815140

  • 14. Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank.
    Lee GU; Hong DY; Kim SY; Kim JW; Lee YH; Park SO; Lee KR
    Medicine (Baltimore); 2024 Mar; 103(9):e37325. PubMed ID: 38428889

  • 15. The performance of artificial intelligence language models in board-style dental knowledge assessment: A preliminary study on ChatGPT.
    Danesh A; Pazouki H; Danesh K; Danesh F; Danesh A
    J Am Dent Assoc; 2023 Nov; 154(11):970-974. PubMed ID: 37676187

  • 16. Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum.
    Das D; Kumar N; Longjam LA; Sinha R; Deb Roy A; Mondal H; Gupta P
    Cureus; 2023 Mar; 15(3):e36034. PubMed ID: 37056538

  • 17. Generative Artificial Intelligence Performs at a Second-Year Orthopedic Resident Level.
    Lum ZC; Collins DP; Dennison S; Guntupalli L; Choudhary S; Saiz AM; Randall RL
    Cureus; 2024 Mar; 16(3):e56104. PubMed ID: 38618358

  • 18. Probing artificial intelligence in neurosurgical training: ChatGPT takes a neurosurgical residents' written exam.
    Bartoli A; May AT; Al-Awadhi A; Schaller K
    Brain Spine; 2024; 4():102715. PubMed ID: 38163001

  • 19. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
    Massey PA; Montgomery C; Zhang AS
    J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415

  • 20. Performance of ChatGPT on Ophthalmology-Related Questions Across Various Examination Levels: Observational Study.
    Haddad F; Saade JS
    JMIR Med Educ; 2024 Jan; 10():e50842. PubMed ID: 38236632
