230 related articles for article (PubMed ID: 38640472)
1. A Multidisciplinary Assessment of ChatGPT's Knowledge of Amyloidosis: Observational Study.
King RC; Samaan JS; Yeo YH; Peng Y; Kunkel DC; Habib AA; Ghashghaei R
JMIR Cardio; 2024 Apr; 8():e53421. PubMed ID: 38640472
2. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.
Yu P; Fang C; Liu X; Fu W; Ling J; Yan Z; Jiang Y; Cao Z; Wu M; Chen Z; Zhu W; Zhang Y; Abudukeremu A; Wang Y; Liu X; Wang J
JMIR Med Educ; 2024 Feb; 10():e48514. PubMed ID: 38335017
3. ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice.
Riedel M; Kaefinger K; Stuehrenberg A; Ritter V; Amann N; Graf A; Recker F; Klein E; Kiechle M; Riedel F; Meyer B
Front Med (Lausanne); 2023; 10():1296615. PubMed ID: 38155661
4. Assessing the Accuracy of Responses by the Language Model ChatGPT to Questions Regarding Bariatric Surgery.
Samaan JS; Yeo YH; Rajeev N; Hawley L; Abel S; Ng WH; Srinivasan N; Park J; Burch M; Watson R; Liran O; Samakar K
Obes Surg; 2023 Jun; 33(6):1790-1796. PubMed ID: 37106269
5. Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer?
Kuşcu O; Pamuk AE; Sütay Süslü N; Hosal S
Front Oncol; 2023; 13():1256459. PubMed ID: 38107064
6. Assessing ChatGPT's ability to answer questions pertaining to erectile dysfunction: can our patients trust it?
Razdan S; Siegal AR; Brewer Y; Sljivich M; Valenzuela RJ
Int J Impot Res; 2023 Nov; ():. PubMed ID: 37985815
7. How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment.
Gilson A; Safranek CW; Huang T; Socrates V; Chi L; Taylor RA; Chartash D
JMIR Med Educ; 2023 Feb; 9():e45312. PubMed ID: 36753318
8. Generative artificial intelligence chatbots may provide appropriate informational responses to common vascular surgery questions by patients.
Chervonski E; Harish KB; Rockman CB; Sadek M; Teter KA; Jacobowitz GR; Berland TL; Lohr J; Moore C; Maldonado TS
Vascular; 2024 Mar; ():17085381241240550. PubMed ID: 38500300
9. Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam.
Su MC; Lin LE; Lin LH; Chen YC
Int J Nurs Stud; 2024 May; 153():104717. PubMed ID: 38401366
10. Evaluating ChatGPT-3.5 and ChatGPT-4.0 Responses on Hyperlipidemia for Patient Education.
Lee TJ; Rao AK; Campbell DJ; Radfar N; Dayal M; Khrais A
Cureus; 2024 May; 16(5):e61067. PubMed ID: 38803402
11. Evaluating the Accuracy of ChatGPT and Google BARD in Fielding Oculoplastic Patient Queries: A Comparative Study on Artificial versus Human Intelligence.
Al-Sharif EM; Penteado RC; Dib El Jalbout N; Topilow NJ; Shoji MK; Kikkawa DO; Liu CY; Korn BS
Ophthalmic Plast Reconstr Surg; 2024 May-Jun 01; 40(3):303-311. PubMed ID: 38215452
12. Assessing the Accuracy of Generative Conversational Artificial Intelligence in Debunking Sleep Health Myths: Mixed Methods Comparative Study With Expert Analysis.
Bragazzi NL; Garbarino S
JMIR Form Res; 2024 Apr; 8():e55762. PubMed ID: 38501898
13. ChatGPT's Ability to Assess Quality and Readability of Online Medical Information: Evidence From a Cross-Sectional Study.
Golan R; Ripps SJ; Reddy R; Loloi J; Bernstein AP; Connelly ZM; Golan NS; Ramasamy R
Cureus; 2023 Jul; 15(7):e42214. PubMed ID: 37484787
14. How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses.
Gül Ş; Erdemir İ; Hanci V; Aydoğmuş E; Erkoç YS
Medicine (Baltimore); 2024 May; 103(18):e38009. PubMed ID: 38701313
15. Enhancing Patient Communication With Chat-GPT in Radiology: Evaluating the Efficacy and Readability of Answers to Common Imaging-Related Questions.
Gordon EB; Towbin AJ; Wingrove P; Shafique U; Haas B; Kitts AB; Feldman J; Furlan A
J Am Coll Radiol; 2024 Feb; 21(2):353-359. PubMed ID: 37863153
16. Pure Wisdom or Potemkin Villages? A Comparison of ChatGPT 3.5 and ChatGPT 4 on USMLE Step 3 Style Questions: Quantitative Analysis.
Knoedler L; Alfertshofer M; Knoedler S; Hoch CC; Funk PF; Cotofana S; Maheta B; Frank K; Brébant V; Prantl L; Lamby P
JMIR Med Educ; 2024 Jan; 10():e51148. PubMed ID: 38180782
17. Quality of information and appropriateness of ChatGPT outputs for urology patients.
Cocci A; Pezzoli M; Lo Re M; Russo GI; Asmundo MG; Fode M; Cacciamani G; Cimino S; Minervini A; Durukan E
Prostate Cancer Prostatic Dis; 2024 Mar; 27(1):103-108. PubMed ID: 37516804
18. Assessment of ChatGPT's performance on neurology written board examination questions.
Chen TC; Multala E; Kearns P; Delashaw J; Dumont A; Maraganore D; Wang A
BMJ Neurol Open; 2023; 5(2):e000530. PubMed ID: 37936648
19. Information Quality and Readability: ChatGPT's Responses to the Most Common Questions About Spinal Cord Injury.
Temel MH; Erden Y; Bağcıer F
World Neurosurg; 2024 Jan; 181():e1138-e1144. PubMed ID: 38000671
20. Evaluating the performance of the language model ChatGPT in responding to common questions of people with epilepsy.
Wu Y; Zhang Z; Dong X; Hong S; Hu Y; Liang P; Li L; Zou B; Wu X; Wang D; Chen H; Qiu H; Tang H; Kang K; Li Q; Zhai X
Epilepsy Behav; 2024 Feb; 151():109645. PubMed ID: 38244419