

BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

589 related articles for PubMed ID 37191485

  • 1. Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations.
    Bhayana R; Krishna S; Bleakney RR
    Radiology; 2023 Jun; 307(5):e230582. PubMed ID: 37191485

  • 2. Performance of ChatGPT on the Brazilian Radiology and Diagnostic Imaging and Mammography Board Examinations.
    Almeida LC; Farina EMJM; Kuriki PEA; Abdala N; Kitamura FC
    Radiol Artif Intell; 2024 Jan; 6(1):e230103. PubMed ID: 38294325

  • 3. Performance evaluation of ChatGPT, GPT-4, and Bard on the official board examination of the Japan Radiology Society.
    Toyama Y; Harigai A; Abe M; Nagano M; Kawabata M; Seki Y; Takase K
    Jpn J Radiol; 2024 Feb; 42(2):201-207. PubMed ID: 37792149

  • 4. Could ChatGPT Pass the UK Radiology Fellowship Examinations?
    Ariyaratne S; Jenko N; Mark Davies A; Iyengar KP; Botchu R
    Acad Radiol; 2024 May; 31(5):2178-2182. PubMed ID: 38160089

  • 5. Performance of ChatGPT on questions from the Brazilian College of Radiology annual resident evaluation test.
    Leitão CA; Salvador GLO; Rabelo LM; Escuissato DL
    Radiol Bras; 2024; 57:e20230083. PubMed ID: 38993961

  • 6. Evaluation of Reliability, Repeatability, Robustness, and Confidence of GPT-3.5 and GPT-4 on a Radiology Board-style Examination.
    Krishna S; Bhambra N; Bleakney R; Bhayana R
    Radiology; 2024 May; 311(2):e232715. PubMed ID: 38771184

  • 7. Evaluation of ChatGPT pathology knowledge using board-style questions.
    Geetha SD; Khan A; Khan A; Kannadath BS; Vitkovski T
    Am J Clin Pathol; 2024 Apr; 161(4):393-398. PubMed ID: 38041797

  • 8. The performance of artificial intelligence language models in board-style dental knowledge assessment: A preliminary study on ChatGPT.
    Danesh A; Pazouki H; Danesh K; Danesh F; Danesh A
    J Am Dent Assoc; 2023 Nov; 154(11):970-974. PubMed ID: 37676187

  • 9. Evaluation of responses to cardiac imaging questions by the artificial intelligence large language model ChatGPT.
    Monroe CL; Abdelhafez YG; Atsina K; Aman E; Nardo L; Madani MH
    Clin Imaging; 2024 Aug; 112:110193. PubMed ID: 38820977

  • 10. Current applications and future potential of ChatGPT in radiology: A systematic review.
    Temperley HC; O'Sullivan NJ; Mac Curtain BM; Corr A; Meaney JF; Kelly ME; Brennan I
    J Med Imaging Radiat Oncol; 2024 Apr; 68(3):257-264. PubMed ID: 38243605

  • 11. Evaluating capabilities of large language models: Performance of GPT-4 on surgical knowledge assessments.
    Beaulieu-Jones BR; Berrigan MT; Shah S; Marwaha JS; Lai SL; Brat GA
    Surgery; 2024 Apr; 175(4):936-942. PubMed ID: 38246839

  • 12. Comparative Performance of ChatGPT and Bard in a Text-Based Radiology Knowledge Assessment.
    Patil NS; Huang RS; van der Pol CB; Larocque N
    Can Assoc Radiol J; 2024 May; 75(2):344-350. PubMed ID: 37578849

  • 13. Performance of Large Language Models on a Neurology Board-Style Examination.
    Schubert MC; Wick W; Venkataramani V
    JAMA Netw Open; 2023 Dec; 6(12):e2346721. PubMed ID: 38060223

  • 14. Performance of ChatGPT on Solving Orthopedic Board-Style Questions: A Comparative Analysis of ChatGPT 3.5 and ChatGPT 4.
    Kim SE; Lee JH; Choi BS; Han HS; Lee MC; Ro DH
    Clin Orthop Surg; 2024 Aug; 16(4):669-673. PubMed ID: 39092297

  • 15. Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis.
    Knoedler L; Alfertshofer M; Knoedler S; Hoch CC; Funk PF; Cotofana S; Maheta B; Frank K; Brébant V; Prantl L; Lamby P
    JMIR Med Educ; 2024 Jan; 10():e51148. PubMed ID: 38180782

  • 16. Artificial intelligence in orthopaedics: can Chat Generative Pre-trained Transformer (ChatGPT) pass Section 1 of the Fellowship of the Royal College of Surgeons (Trauma & Orthopaedics) examination?
    Cuthbert R; Simpson AI
    Postgrad Med J; 2023 Sep; 99(1176):1110-1114. PubMed ID: 37410674

  • 17. Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment.
    Mihalache A; Popovic MM; Muni RH
    JAMA Ophthalmol; 2023 Jun; 141(6):589-597. PubMed ID: 37103928

  • 18. Comparison of the problem-solving performance of ChatGPT-3.5, ChatGPT-4, Bing Chat, and Bard for the Korean emergency medicine board examination question bank.
    Lee GU; Hong DY; Kim SY; Kim JW; Lee YH; Park SO; Lee KR
    Medicine (Baltimore); 2024 Mar; 103(9):e37325. PubMed ID: 38428889

  • 19. Artificial intelligence performance in clinical neurology queries: the ChatGPT model.
    Altunisik E; Firat YE; Cengiz EK; Comruk GB
    Neurol Res; 2024 May; 46(5):437-443. PubMed ID: 38522424

  • 20. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations.
    Ali R; Tang OY; Connolly ID; Zadnik Sullivan PL; Shin JH; Fridley JS; Asaad WF; Cielo D; Oyelese AA; Doberstein CE; Gokaslan ZL; Telfeian AE
    Neurosurgery; 2023 Dec; 93(6):1353-1365. PubMed ID: 37581444
