168 related articles for article (PubMed ID: 36112914)
1. Explainability does not improve biochemistry staff trust in artificial intelligence-based decision support. Farrell CJ. Ann Clin Biochem; 2022 Nov; 59(6):447-449. PubMed ID: 36112914
2. Decision support or autonomous artificial intelligence? The case of wrong blood in tube errors. Farrell CL. Clin Chem Lab Med; 2022 Nov; 60(12):1993-1997. PubMed ID: 34717051
3. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey. Ploug T; Sundby A; Moeslund TB; Holm S. J Med Internet Res; 2021 Dec; 23(12):e26611. PubMed ID: 34898454
4. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study. Fernandes GJ; Choi A; Schauer JM; Pfammatter AF; Spring BJ; Darwiche A; Alshurafa NI. J Med Internet Res; 2023 Sep; 25:e42047. PubMed ID: 37672333
5. Machine learning models outperform manual result review for the identification of wrong blood in tube errors in complete blood count results. Farrell CL; Giannoutsos J. Int J Lab Hematol; 2022 Jun; 44(3):497-503. PubMed ID: 35274468
6. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Diprose WK; Buist N; Hua N; Thurier Q; Shand G; Robinson R. J Am Med Inform Assoc; 2020 Apr; 27(4):592-600. PubMed ID: 32106285
7. A Clinical Decision Support System for Sleep Staging Tasks With Explanations From Artificial Intelligence: User-Centered Design and Evaluation Study. Hwang J; Lee T; Lee H; Byun S. J Med Internet Res; 2022 Jan; 24(1):e28659. PubMed ID: 35044311
8. Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based. McCoy LG; Brenna CTA; Chen SS; Vold K; Das S. J Clin Epidemiol; 2022 Feb; 142:252-257. PubMed ID: 34748907
9. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Meas M; Machlev R; Kose A; Tepljakov A; Loo L; Levron Y; Petlenkov E; Belikov J. Sensors (Basel); 2022 Aug; 22(17). PubMed ID: 36080795
10. Trust criteria for artificial intelligence in health: normative and epistemic considerations. Kostick-Quenet K; Lang BH; Smith J; Hurley M; Blumenthal-Barby J. J Med Ethics; 2024 Jul; 50(8):544-551. PubMed ID: 37979976
11. Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task. Jin W; Fatehi M; Guo R; Hamarneh G. Artif Intell Med; 2024 Feb; 148:102751. PubMed ID: 38325929
12. The false hope of current approaches to explainable artificial intelligence in health care. Ghassemi M; Oakden-Rayner L; Beam AL. Lancet Digit Health; 2021 Nov; 3(11):e745-e750. PubMed ID: 34711379
14. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19. Goel K; Sindhgatta R; Kalra S; Goel R; Mutreja P. Comput Biol Med; 2022 Jul; 146:105587. PubMed ID: 35551007
15. The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study. Woodcock C; Mittelstadt B; Busbridge D; Blank G. J Med Internet Res; 2021 Nov; 23(11):e29386. PubMed ID: 34730544
16. Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems. Zhang Z; Genc Y; Wang D; Ahsen ME; Fan X. J Med Syst; 2021 May; 45(6):64. PubMed ID: 33948743
17. Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems. Govea J; Gutierrez R; Villegas-Ch W. Front Artif Intell; 2024; 7:1410790. PubMed ID: 39301478
18. The Impact of Information Relevancy and Interactivity on Intensivists' Trust in a Machine Learning-Based Bacteremia Prediction System: Simulation Study. Katzburg O; Roimi M; Frenkel A; Ilan R; Bitan Y. JMIR Hum Factors; 2024 Aug; 11:e56924. PubMed ID: 39092520
19. Trust in AI: why we should be designing for APPROPRIATE reliance. Benda NC; Novak LL; Reale C; Ancker JS. J Am Med Inform Assoc; 2021 Dec; 29(1):207-212. PubMed ID: 34725693
20. Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries. van der Veer SN; Riste L; Cheraghi-Sohi S; Phipps DL; Tully MP; Bozentko K; Atwood S; Hubbard A; Wiper C; Oswald M; Peek N. J Am Med Inform Assoc; 2021 Sep; 28(10):2128-2138. PubMed ID: 34333646