

BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

172 related articles for PubMed ID 36112914

  • 1. Explainability does not improve biochemistry staff trust in artificial intelligence-based decision support.
    Lancaster Farrell CJ
    Ann Clin Biochem; 2022 Nov; 59(6):447-449. PubMed ID: 36112914

  • 2. Decision support or autonomous artificial intelligence? The case of wrong blood in tube errors.
    Farrell CL
    Clin Chem Lab Med; 2022 Nov; 60(12):1993-1997. PubMed ID: 34717051

  • 3. Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
    Ploug T; Sundby A; Moeslund TB; Holm S
    J Med Internet Res; 2021 Dec; 23(12):e26611. PubMed ID: 34898454

  • 4. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study.
    Fernandes GJ; Choi A; Schauer JM; Pfammatter AF; Spring BJ; Darwiche A; Alshurafa NI
    J Med Internet Res; 2023 Sep; 25:e42047. PubMed ID: 37672333

  • 5. Machine learning models outperform manual result review for the identification of wrong blood in tube errors in complete blood count results.
    Farrell CL; Giannoutsos J
    Int J Lab Hematol; 2022 Jun; 44(3):497-503. PubMed ID: 35274468

  • 6. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator.
    Diprose WK; Buist N; Hua N; Thurier Q; Shand G; Robinson R
    J Am Med Inform Assoc; 2020 Apr; 27(4):592-600. PubMed ID: 32106285

  • 7. A Clinical Decision Support System for Sleep Staging Tasks With Explanations From Artificial Intelligence: User-Centered Design and Evaluation Study.
    Hwang J; Lee T; Lee H; Byun S
    J Med Internet Res; 2022 Jan; 24(1):e28659. PubMed ID: 35044311

  • 8. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
    Amann J; Blasimme A; Vayena E; Frey D; Madai VI
    BMC Med Inform Decis Mak; 2020 Nov; 20(1):310. PubMed ID: 33256715

  • 9. Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based.
    McCoy LG; Brenna CTA; Chen SS; Vold K; Das S
    J Clin Epidemiol; 2022 Feb; 142:252-257. PubMed ID: 34748907

  • 10. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI).
    Meas M; Machlev R; Kose A; Tepljakov A; Loo L; Levron Y; Petlenkov E; Belikov J
    Sensors (Basel); 2022 Aug; 22(17). PubMed ID: 36080795

  • 11. Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task.
    Jin W; Fatehi M; Guo R; Hamarneh G
    Artif Intell Med; 2024 Feb; 148:102751. PubMed ID: 38325929

  • 12. The false hope of current approaches to explainable artificial intelligence in health care.
    Ghassemi M; Oakden-Rayner L; Beam AL
    Lancet Digit Health; 2021 Nov; 3(11):e745-e750. PubMed ID: 34711379

  • 13. Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes.
    Chari S; Acharya P; Gruen DM; Zhang O; Eyigoz EK; Ghalwash M; Seneviratne O; Saiz FS; Meyer P; Chakraborty P; McGuinness DL
    Artif Intell Med; 2023 Mar; 137:102498. PubMed ID: 36868690

  • 14. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.
    Goel K; Sindhgatta R; Kalra S; Goel R; Mutreja P
    Comput Biol Med; 2022 Jul; 146:105587. PubMed ID: 35551007

  • 15. The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study.
    Woodcock C; Mittelstadt B; Busbridge D; Blank G
    J Med Internet Res; 2021 Nov; 23(11):e29386. PubMed ID: 34730544

  • 16. Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems.
    Zhang Z; Genc Y; Wang D; Ahsen ME; Fan X
    J Med Syst; 2021 May; 45(6):64. PubMed ID: 33948743

  • 17. Trust in AI: why we should be designing for APPROPRIATE reliance.
    Benda NC; Novak LL; Reale C; Ancker JS
    J Am Med Inform Assoc; 2021 Dec; 29(1):207-212. PubMed ID: 34725693

  • 18. Trading off accuracy and explainability in AI decision-making: findings from 2 citizens' juries.
    van der Veer SN; Riste L; Cheraghi-Sohi S; Phipps DL; Tully MP; Bozentko K; Atwood S; Hubbard A; Wiper C; Oswald M; Peek N
    J Am Med Inform Assoc; 2021 Sep; 28(10):2128-2138. PubMed ID: 34333646

  • 19. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated.
    Rojas JC; Teran M; Umscheid CA
    Crit Care Clin; 2023 Oct; 39(4):769-782. PubMed ID: 37704339

  • 20. Explainable Artificial Intelligence for Predictive Modeling in Healthcare.
    Yang CC
    J Healthc Inform Res; 2022 Jun; 6(2):228-239. PubMed ID: 35194568
