Pubmed for Handhelds
Title: Evaluating ChatGPT ability to answer urinary tract infection-related questions.
Author: Cakir H, Caglar U, Sekkeli S, Zerdali E, Sarilar O, Yildiz O, Ozgor F.
Journal: Infect Dis Now; 2024 Jun; 54(4):104884.
PubMed ID: 38460761.

Abstract:
INTRODUCTION: For the first time, the accuracy and proficiency of ChatGPT answers on urinary tract infections (UTIs) were evaluated.
METHODS: Two lists of questions were created: frequently asked questions (FAQs, public-based inquiries) on relevant topics, and questions based on guideline information (guideline-based inquiries). ChatGPT responses to the FAQs and the scientific questions were scored by two urologists and an infectious disease specialist. The quality and reliability of all ChatGPT answers were assessed using the Global Quality Score (GQS). The reproducibility of ChatGPT answers was analyzed by asking each question twice.
RESULTS: Overall, 96.2% of FAQs related to UTIs (75/78 inquiries) were answered correctly and adequately by ChatGPT and scored GQS 5. None of the ChatGPT answers were classified as GQS 2 or GQS 1. Moreover, FAQs about cystitis, urethritis, and epididymo-orchitis were answered by ChatGPT with 100% accuracy (GQS 5). For questions based on the EAU urological infections guidelines, 61 (89.7%), 5 (7.4%), and 2 (2.9%) ChatGPT responses were scored GQS 5, GQS 4, and GQS 3, respectively; none were categorized as GQS 2 or GQS 1. Comparison of the mean GQS values of ChatGPT answers to FAQs and to EAU guideline questions showed that ChatGPT responded similarly well to both question groups (p = 0.168). The reproducibility of ChatGPT responses was highest for the FAQ subgroups of cystitis, urethritis, and epididymo-orchitis (100% for each subgroup).
CONCLUSION: The present study showed that ChatGPT gave accurate and satisfactory answers to both public-based inquiries and EAU urological infection guideline-based questions. The reproducibility of ChatGPT answers exceeded 90% for both FAQs and scientific questions.
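Note: The abstract compares mean GQS values between the FAQ and guideline question groups and reports p = 0.168, but it does not state which statistical test was used. The sketch below is a minimal illustration only, assuming a Mann-Whitney U test on per-answer GQS scores; the guideline-group counts are taken from the abstract, while the exact split of the three non-GQS-5 FAQ answers is a placeholder assumption, not the study's data.

# Illustrative sketch: comparing two ordinal GQS score distributions (1-5)
# with a Mann-Whitney U test. The abstract does not specify the test used,
# and the FAQ score split below is hypothetical.
from scipy.stats import mannwhitneyu

# 78 FAQ answers: 75 scored GQS 5 (96.2%); the remaining 3 split is assumed.
faq_gqs = [5] * 75 + [4, 4, 3]
# 68 guideline answers: 61 GQS 5, 5 GQS 4, 2 GQS 3 (as reported).
guideline_gqs = [5] * 61 + [4] * 5 + [3] * 2

stat, p_value = mannwhitneyu(faq_gqs, guideline_gqs, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")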