

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

108 related articles for article (PubMed ID: 34347601)

  • 1. Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making.
    Hoque MN; Mueller K
    IEEE Trans Vis Comput Graph; 2022 Dec; 28(12):4728-4740. PubMed ID: 34347601

  • 2. DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models.
    Cheng F; Ming Y; Qu H
    IEEE Trans Vis Comput Graph; 2021 Feb; 27(2):1438-1447. PubMed ID: 33074811

  • 3. Making Expert Decisions Easier to Fathom: On the Explainability of Visual Object Recognition Expertise.
    Hegdé J; Bart E
    Front Neurosci; 2018; 12():670. PubMed ID: 30369862

  • 4. ChemInformatics Model Explorer (CIME): exploratory analysis of chemical model explanations.
    Humer C; Heberle H; Montanari F; Wolf T; Huber F; Henderson R; Heinrich J; Streit M
    J Cheminform; 2022 Apr; 14(1):21. PubMed ID: 35379315

  • 5. SUBPLEX: A Visual Analytics Approach to Understand Local Model Explanations at the Subpopulation Level.
    Yuan J; Chan GY; Barr B; Overton K; Rees K; Nonato LG; Bertini E; Silva CT
    IEEE Comput Graph Appl; 2022; 42(6):24-36. PubMed ID: 37015716

  • 6. RuleMatrix: Visualizing and Understanding Classifiers with Rules.
    Ming Y; Qu H; Bertini E
    IEEE Trans Vis Comput Graph; 2018 Aug; ():. PubMed ID: 30130210

  • 7. Explaining the black-box smoothly - A counterfactual approach.
    Singla S; Eslami M; Pollack B; Wallace S; Batmanghelich K
    Med Image Anal; 2023 Feb; 84():102721. PubMed ID: 36571975

  • 8. PIP: Pictorial Interpretable Prototype Learning for Time Series Classification.
    Ghods A; Cook DJ
    IEEE Comput Intell Mag; 2022 Feb; 17(1):34-45. PubMed ID: 35822085

  • 9. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare.
    Barda AJ; Horvat CM; Hochheiser H
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):257. PubMed ID: 33032582

  • 10. Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction.
    Stange S; Hassan T; Schröder F; Konkol J; Kopp S
    Front Artif Intell; 2022; 5():866920. PubMed ID: 35573901

  • 11. The grammar of interactive explanatory model analysis.
    Baniecki H; Parzych D; Biecek P
    Data Min Knowl Discov; 2023 Feb; ():1-37. PubMed ID: 36818741

  • 12. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 13. VisExPreS: A Visual Interactive Toolkit for User-Driven Evaluations of Embeddings.
    Ghosh A; Nashaat M; Miller J; Quader S
    IEEE Trans Vis Comput Graph; 2022 Jul; 28(7):2791-2807. PubMed ID: 33211658

  • 14. User-Centered Design of A Novel Risk Prediction Behavior Change Tool Augmented With an Artificial Intelligence Engine (MyDiabetesIQ): A Sociotechnical Systems Approach.
    Shields C; Cunningham SG; Wake DJ; Fioratou E; Brodie D; Philip S; Conway NT
    JMIR Hum Factors; 2022 Feb; 9(1):e29973. PubMed ID: 35133280

  • 15. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
    Rudin C
    Nat Mach Intell; 2019 May; 1(5):206-215. PubMed ID: 35603010

  • 16. Visual Analysis of Discrimination in Machine Learning.
    Wang Q; Xu Z; Chen Z; Wang Y; Liu S; Qu H
    IEEE Trans Vis Comput Graph; 2021 Feb; 27(2):1470-1480. PubMed ID: 33048751

  • 17. Ranking Rule-Based Automatic Explanations for Machine Learning Predictions on Asthma Hospital Encounters in Patients With Asthma: Retrospective Cohort Study.
    Zhang X; Luo G
    JMIR Med Inform; 2021 Aug; 9(8):e28287. PubMed ID: 34383673

  • 18. SMILE: systems metabolomics using interpretable learning and evolution.
    Sha C; Cuperlovic-Culf M; Hu T
    BMC Bioinformatics; 2021 May; 22(1):284. PubMed ID: 34049495

  • 19. Transparency as design publicity: explaining and justifying inscrutable algorithms.
    Loi M; Ferrario A; Viganò E
    Ethics Inf Technol; 2021; 23(3):253-263. PubMed ID: 34867077

  • 20. Towards human-computer synergetic analysis of large-scale biological data.
    Singh R; Yang H; Dalziel B; Asarnow D; Murad W; Foote D; Gormley M; Stillman J; Fisher S
    BMC Bioinformatics; 2013; 14 Suppl 14(Suppl 14):S10. PubMed ID: 24267485
