277 related articles for PubMed ID 33008388

  • 1. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
    Hatwell J; Gaber MM; Atif Azad RM
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):250. PubMed ID: 33008388

  • 2. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
    Lucieri A; Bajwa MN; Braun SA; Malik MI; Dengel A; Ahmed S
    Comput Methods Programs Biomed; 2022 Mar; 215():106620. PubMed ID: 35033756

  • 3. Explainable AI for Bioinformatics: Methods, Tools and Applications.
    Karim MR; Islam T; Shajalal M; Beyan O; Lange C; Cochez M; Rebholz-Schuhmann D; Decker S
    Brief Bioinform; 2023 Sep; 24(5):. PubMed ID: 37478371

  • 4. IHCP: interpretable hepatitis C prediction system based on black-box machine learning models.
    Fan Y; Lu X; Sun G
    BMC Bioinformatics; 2023 Sep; 24(1):333. PubMed ID: 37674125

  • 5. Explainable AI: Machine Learning Interpretation in Blackcurrant Powders.
    Przybył K
    Sensors (Basel); 2024 May; 24(10):. PubMed ID: 38794052

  • 6. A comparative analysis of multi-level computer-assisted decision making systems for traumatic injuries.
    Ji SY; Smith R; Huynh T; Najarian K
    BMC Med Inform Decis Mak; 2009 Jan; 9():2. PubMed ID: 19144188

  • 7. An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients.
    Peng J; Zou K; Zhou M; Teng Y; Zhu X; Zhang F; Xu J
    J Med Syst; 2021 Apr; 45(5):61. PubMed ID: 33847850

  • 8. An innovative artificial intelligence-based method to compress complex models into explainable, model-agnostic and reduced decision support systems with application to healthcare (NEAR).
    Kassem K; Sperti M; Cavallo A; Vergani AM; Fassino D; Moz M; Liscio A; Banali R; Dahlweid M; Benetti L; Bruno F; Gallone G; De Filippo O; Iannaccone M; D'Ascenzo F; De Ferrari GM; Morbiducci U; Della Valle E; Deriu MA
    Artif Intell Med; 2024 May; 151():102841. PubMed ID: 38658130

  • 9. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
    Ali S; Akhlaq F; Imran AS; Kastrati Z; Daudpota SM; Moosa M
    Comput Biol Med; 2023 Nov; 166():107555. PubMed ID: 37806061

  • 10. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging.
    Hadjiiski L; Cha K; Chan HP; Drukker K; Morra L; Näppi JJ; Sahiner B; Yoshida H; Chen Q; Deserno TM; Greenspan H; Huisman H; Huo Z; Mazurchuk R; Petrick N; Regge D; Samala R; Summers RM; Suzuki K; Tourassi G; Vergara D; Armato SG
    Med Phys; 2023 Feb; 50(2):e1-e24. PubMed ID: 36565447

  • 11. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
    Chen H; Cohen E; Wilson D; Alfred M
    JMIR Hum Factors; 2024 Jan; 11():e53378. PubMed ID: 38271086

  • 12. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI).
    Meas M; Machlev R; Kose A; Tepljakov A; Loo L; Levron Y; Petlenkov E; Belikov J
    Sensors (Basel); 2022 Aug; 22(17):. PubMed ID: 36080795

  • 13. To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods.
    Amparore E; Perotti A; Bajardi P
    PeerJ Comput Sci; 2021; 7():e479. PubMed ID: 33977131

  • 14. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
    Nambiar A; S H; S S
    Front Artif Intell; 2023; 6():1272506. PubMed ID: 38111787

  • 15. Survey of Explainable AI Techniques in Healthcare.
    Chaddad A; Peng J; Xu J; Bouridane A
    Sensors (Basel); 2023 Jan; 23(2):. PubMed ID: 36679430

  • 16. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022).
    Loh HW; Ooi CP; Seoni S; Barua PD; Molinari F; Acharya UR
    Comput Methods Programs Biomed; 2022 Nov; 226():107161. PubMed ID: 36228495

  • 17. From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks.
    Alfeo AL; Zippo AG; Catrambone V; Cimino MGCA; Toschi N; Valenza G
    Comput Methods Programs Biomed; 2023 Jun; 236():107550. PubMed ID: 37086584

  • 18. Interpretable heartbeat classification using local model-agnostic explanations on ECGs.
    Neves I; Folgado D; Santos S; Barandas M; Campagner A; Ronzio L; Cabitza F; Gamboa H
    Comput Biol Med; 2021 Jun; 133():104393. PubMed ID: 33915362

  • 19. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
    Lamba K; Rani S
    J Neurosci Methods; 2024 May; 408():110159. PubMed ID: 38723868

  • 20. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.
    Petch J; Di S; Nelson W
    Can J Cardiol; 2022 Feb; 38(2):204-213. PubMed ID: 34534619
