These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

162 related articles for article (PubMed ID: 33074811)

  • 1. DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models.
    Cheng F; Ming Y; Qu H
    IEEE Trans Vis Comput Graph; 2021 Feb; 27(2):1438-1447. PubMed ID: 33074811

  • 2. GANterfactual-Counterfactual Explanations for Medical Non-experts Using Generative Adversarial Learning.
    Mertes S; Huber T; Weitz K; Heimerl A; André E
    Front Artif Intell; 2022; 5():825565. PubMed ID: 35464995

  • 3. Explaining the black-box smoothly-A counterfactual approach.
    Singla S; Eslami M; Pollack B; Wallace S; Batmanghelich K
    Med Image Anal; 2023 Feb; 84():102721. PubMed ID: 36571975

  • 4. Explainable artificial intelligence in forensics: Realistic explanations for number of contributor predictions of DNA profiles.
    Veldhuis MS; Ariëns S; Ypma RJF; Abeel T; Benschop CCG
    Forensic Sci Int Genet; 2022 Jan; 56():102632. PubMed ID: 34839075

  • 5. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare.
    Barda AJ; Horvat CM; Hochheiser H
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):257. PubMed ID: 33032582

  • 6. Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making.
    Hoque MN; Mueller K
    IEEE Trans Vis Comput Graph; 2022 Dec; 28(12):4728-4740. PubMed ID: 34347601

  • 7. Explainable AI as evidence of fair decisions.
    Leben D
    Front Psychol; 2023; 14():1069426. PubMed ID: 36865358

  • 8. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 9. Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients.
    Wang Z; Samsten I; Kougia V; Papapetrou P
    Artif Intell Med; 2023 Jan; 135():102457. PubMed ID: 36628793

  • 10. Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction.
    Stange S; Hassan T; Schröder F; Konkol J; Kopp S
    Front Artif Intell; 2022; 5():866920. PubMed ID: 35573901

  • 11. Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network.
    Matsui T; Taki M; Pham TQ; Chikazoe J; Jimura K
    Front Neuroinform; 2021; 15():802938. PubMed ID: 35369003

  • 12. CX-ToM: Counterfactual explanations with theory-of-mind for enhancing human trust in image recognition models.
    Akula AR; Wang K; Liu C; Saba-Sadiya S; Lu H; Todorovic S; Chai J; Zhu SC
    iScience; 2022 Jan; 25(1):103581. PubMed ID: 35036861

  • 13. Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions.
    Casacuberta D; Guersenzvaig A; Moyano-Fernández C
    AI Soc; 2022 Mar; ():1-15. PubMed ID: 35370366

  • 14. SUBPLEX: A Visual Analytics Approach to Understand Local Model Explanations at the Subpopulation Level.
    Yuan J; Chan GY; Barr B; Overton K; Rees K; Nonato LG; Bertini E; Silva CT
    IEEE Comput Graph Appl; 2022; 42(6):24-36. PubMed ID: 37015716

  • 15. How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains.
    Celar L; Byrne RMJ
    Mem Cognit; 2023 Oct; 51(7):1481-1496. PubMed ID: 36964302

  • 16. From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks.
    Alfeo AL; Zippo AG; Catrambone V; Cimino MGCA; Toschi N; Valenza G
    Comput Methods Programs Biomed; 2023 Jun; 236():107550. PubMed ID: 37086584

  • 17. Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI.
    Lucieri A; Dengel A; Ahmed S
    Front Bioinform; 2023; 3():1194993. PubMed ID: 37484865

  • 18. ChemInformatics Model Explorer (CIME): exploratory analysis of chemical model explanations.
    Humer C; Heberle H; Montanari F; Wolf T; Huber F; Henderson R; Heinrich J; Streit M
    J Cheminform; 2022 Apr; 14(1):21. PubMed ID: 35379315

  • 19. Explaining Black Box Drug Target Prediction Through Model Agnostic Counterfactual Samples.
    Nguyen TM; Quinn TP; Nguyen T; Tran T
    IEEE/ACM Trans Comput Biol Bioinform; 2023; 20(2):1020-1029. PubMed ID: 35820003

  • 20. Machine Learning and Explainable Artificial Intelligence Using Counterfactual Explanations for Evaluating Posture Parameters.
    Dindorf C; Ludwig O; Simon S; Becker S; Fröhlich M
    Bioengineering (Basel); 2023 Apr; 10(5):. PubMed ID: 37237581
