BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

266 related articles for article (PubMed ID: 38401238)

  • 41. Performance of ChatGPT and Bard in self-assessment questions for nephrology board renewal.
    Noda R; Izaki Y; Kitano F; Komatsu J; Ichikawa D; Shibagaki Y
    Clin Exp Nephrol; 2024 May; 28(5):465-469. PubMed ID: 38353783

  • 42. Can AI pass the written European Board Examination in Neurological Surgery? - Ethical and practical issues.
    Stengel FC; Stienen MN; Ivanov M; Gandía-González ML; Raffa G; Ganau M; Whitfield P; Motov S
    Brain Spine; 2024; 4():102765. PubMed ID: 38510593

  • 43. Performance of ChatGPT on Specialty Certificate Examination in Dermatology multiple-choice questions.
    Passby L; Jenko N; Wernham A
    Clin Exp Dermatol; 2024 Jun; 49(7):722-727. PubMed ID: 37264670

  • 44. Artificial intelligence model GPT4 narrowly fails simulated radiological protection exam.
    Roemer G; Li A; Mahmood U; Dauer L; Bellamy M
    J Radiol Prot; 2024 Jan; 44(1):. PubMed ID: 38232401

  • 45. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations.
    Ali R; Tang OY; Connolly ID; Zadnik Sullivan PL; Shin JH; Fridley JS; Asaad WF; Cielo D; Oyelese AA; Doberstein CE; Gokaslan ZL; Telfeian AE
    Neurosurgery; 2023 Dec; 93(6):1353-1365. PubMed ID: 37581444

  • 46. Assessment of Pathology Domain-Specific Knowledge of ChatGPT and Comparison to Human Performance.
    Wang AY; Lin S; Tran C; Homer RJ; Wilsdon D; Walsh JC; Goebel EA; Sansano I; Sonawane S; Cockenpot V; Mukhopadhyay S; Taskin T; Zahra N; Cima L; Semerci O; Özamrak BG; Mishra P; Vennavalli NS; Chen PC; Cecchini MJ
    Arch Pathol Lab Med; 2024 Jan; ():. PubMed ID: 38244054

  • 47. Evaluating ChatGPT Performance on the Orthopaedic In-Training Examination.
    Kung JE; Marshall C; Gauthier C; Gonzalez TA; Jackson JB
    JB JS Open Access; 2023; 8(3):. PubMed ID: 37693092

  • 48. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
    Massey PA; Montgomery C; Zhang AS
    J Am Acad Orthop Surg; 2023 Dec; 31(23):1173-1179. PubMed ID: 37671415

  • 49. Quality of Answers of Generative Large Language Models vs Peer Patients for Interpreting Lab Test Results for Lay Patients: Evaluation Study.
    He Z; Bhasuran B; Jin Q; Tian S; Hanna K; Shavor C; Arguello LG; Murray P; Lu Z
    ArXiv; 2024 Jan; ():. PubMed ID: 38529075

  • 50. ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology.
    Lewandowski M; Łukowicz P; Świetlik D; Barańska-Rybak W
    Clin Exp Dermatol; 2024 Jun; 49(7):686-691. PubMed ID: 37540015

  • 51. Performance of an Artificial Intelligence Chatbot in Ophthalmic Knowledge Assessment.
    Mihalache A; Popovic MM; Muni RH
    JAMA Ophthalmol; 2023 Jun; 141(6):589-597. PubMed ID: 37103928

  • 52. Evaluation of Reliability, Repeatability, Robustness, and Confidence of GPT-3.5 and GPT-4 on a Radiology Board-style Examination.
    Krishna S; Bhambra N; Bleakney R; Bhayana R
    Radiology; 2024 May; 311(2):e232715. PubMed ID: 38771184

  • 53. ChatGPT performance on radiation technologist and therapist entry to practice exams.
    Duggan R; Tsuruda KM
    J Med Imaging Radiat Sci; 2024 May; 55(4):101426. PubMed ID: 38797622

  • 54. Evaluating the Artificial Intelligence Performance Growth in Ophthalmic Knowledge.
    Jiao C; Edupuganti NR; Patel PA; Bui T; Sheth V
    Cureus; 2023 Sep; 15(9):e45700. PubMed ID: 37868408

  • 55. Performance and exploration of ChatGPT in medical examination, records and education in Chinese: Pave the way for medical AI.
    Wang H; Wu W; Dou Z; He L; Yang L
    Int J Med Inform; 2023 Sep; 177():105173. PubMed ID: 37549499

  • 56. Leveraging Large Language Models (LLM) for the Plastic Surgery Resident Training: Do They Have a Role?
    Mohapatra DP; Thiruvoth FM; Tripathy S; Rajan S; Vathulya M; Lakshmi P; Singh VK; Haq AU
    Indian J Plast Surg; 2023 Oct; 56(5):413-420. PubMed ID: 38026769

  • 57. The Performance of GPT-3.5, GPT-4, and Bard on the Japanese National Dentist Examination: A Comparison Study.
    Ohta K; Ohta S
    Cureus; 2023 Dec; 15(12):e50369. PubMed ID: 38213361

  • 58. Assessing the accuracy and completeness of artificial intelligence language models in providing information on methotrexate use.
    Coskun BN; Yagiz B; Ocakoglu G; Dalkilic E; Pehlivan Y
    Rheumatol Int; 2024 Mar; 44(3):509-515. PubMed ID: 37747564

  • 59. ChatGPT performance in the medical specialty exam: An observational study.
    Oztermeli AD; Oztermeli A
    Medicine (Baltimore); 2023 Aug; 102(32):e34673. PubMed ID: 37565917

  • 60. Performance of Generative Artificial Intelligence in Dental Licensing Examinations.
    Chau RCW; Thu KM; Yu OY; Hsung RT; Lo ECM; Lam WYH
    Int Dent J; 2024 Jun; 74(3):616-621. PubMed ID: 38242810
