

PUBMED FOR HANDHELDS



  • Title: Fact Check: Assessing the Response of ChatGPT to Alzheimer's Disease Statements with Varying Degrees of Misinformation.
    Author: Huang SS, Song Q, Beiting KJ, Duggan MC, Hines K, Murff H, Leung V, Powers J, Harvey TS, Malin B, Yin Z.
    Journal: medRxiv; 2023 Sep 07. PubMed ID: 37745352.
    Abstract:
    BACKGROUND: There are many myths regarding Alzheimer's disease (AD) circulating on the Internet, each exhibiting varying degrees of accuracy, inaccuracy, and misinformation. Large language models such as ChatGPT may be a useful tool for assessing the veracity of these myths; however, they can also introduce misinformation. The objective of this study is to assess ChatGPT's ability to identify and address AD myths with reliable information.
    METHODS: We conducted a cross-sectional study of clinicians' evaluations of ChatGPT (GPT-4.0) responses to 20 selected AD myths. We prompted ChatGPT to express its opinion on each myth and then requested that it rephrase its explanation in simplified language that could be more readily understood by individuals with a middle school education. We implemented a survey using REDCap to determine the degree to which clinicians agreed with the accuracy of each ChatGPT explanation and the degree to which the simplified rewriting was readable and retained the message of the original. We also collected their explanations of any disagreement with ChatGPT's responses. We used a five-point Likert-type scale with scores ranging from -2 to 2 to quantify clinicians' agreement in each aspect of the evaluation.
    RESULTS: The clinicians (n=11) were generally satisfied with ChatGPT's explanations, with a mean (SD) score of 1.0 (±0.3) across the 20 myths. While ChatGPT correctly identified all 20 myths as inaccurate, some clinicians disagreed with its explanations for 7 of the myths. Overall, 9 of the 11 professionals either agreed or strongly agreed that ChatGPT has the potential to provide meaningful explanations of certain myths.
    CONCLUSIONS: The majority of surveyed healthcare professionals acknowledged the potential value of ChatGPT in mitigating AD misinformation; however, they highlighted the need for more refined and detailed explanations of the disease's mechanisms and treatments.
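    As a rough illustration of the Likert-based scoring described in the METHODS (a minimal sketch only, not the authors' code; the dictionary keys and rating values below are hypothetical), the following shows how per-myth clinician agreement scores on the -2 to 2 scale could be aggregated into a mean (±SD) summary of the kind reported in the RESULTS:

        # Sketch: aggregate clinician agreement ratings (-2 .. 2) per myth,
        # then summarize across myths as mean (±SD). All data are hypothetical.
        from statistics import mean, stdev

        # ratings[myth_id] = scores from the 11 surveyed clinicians
        ratings = {
            "myth_01": [2, 1, 1, 2, 1, 0, 1, 2, 1, 1, 1],
            "myth_02": [1, 1, 2, 1, 0, 1, 1, 1, 2, 1, 1],
            # ... one entry for each of the 20 myths
        }

        # Mean agreement per myth, then mean and SD across myths,
        # mirroring the "1.0 (±0.3) across the 20 myths" style of summary.
        per_myth_means = [mean(scores) for scores in ratings.values()]
        overall_mean = mean(per_myth_means)
        overall_sd = stdev(per_myth_means)
        print(f"Agreement across myths: {overall_mean:.1f} (±{overall_sd:.1f})")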