BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

209 related articles for article (PubMed ID: 36628785)

  • 1. Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions.
    Parimbelli E; Buonocore TM; Nicora G; Michalowski W; Wilk S; Bellazzi R
    Artif Intell Med; 2023 Jan; 135():102471. PubMed ID: 36628785

  • 2. Explainable AI: Machine Learning Interpretation in Blackcurrant Powders.
    PrzybyƂ K
    Sensors (Basel); 2024 May; 24(10):. PubMed ID: 38794052

  • 3. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
    Nambiar A; S H; S S
    Front Artif Intell; 2023; 6():1272506. PubMed ID: 38111787

  • 4. Modeling strength characteristics of basalt fiber reinforced concrete using multiple explainable machine learning with a graphical user interface.
    Kulasooriya WKVJB; Ranasinghe RSS; Perera US; Thisovithan P; Ekanayake IU; Meddage DPP
    Sci Rep; 2023 Aug; 13(1):13138. PubMed ID: 37573410

  • 5. Explainable AI for Bioinformatics: Methods, Tools and Applications.
    Karim MR; Islam T; Shajalal M; Beyan O; Lange C; Cochez M; Rebholz-Schuhmann D; Decker S
    Brief Bioinform; 2023 Sep; 24(5):. PubMed ID: 37478371

  • 6. Interpretable heartbeat classification using local model-agnostic explanations on ECGs.
    Neves I; Folgado D; Santos S; Barandas M; Campagner A; Ronzio L; Cabitza F; Gamboa H
    Comput Biol Med; 2021 Jun; 133():104393. PubMed ID: 33915362

  • 7. An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients.
    Peng J; Zou K; Zhou M; Teng Y; Zhu X; Zhang F; Xu J
    J Med Syst; 2021 Apr; 45(5):61. PubMed ID: 33847850

  • 8. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.
    Petch J; Di S; Nelson W
    Can J Cardiol; 2022 Feb; 38(2):204-213. PubMed ID: 34534619

  • 9. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 10. Toward explainable AI (XAI) for mental health detection based on language behavior.
    Kerz E; Zanwar S; Qiao Y; Wiechmann D
    Front Psychiatry; 2023; 14():1219479. PubMed ID: 38144474

  • 11. Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method.
    Le TT; Kim H; Kang H; Kim H
    Sensors (Basel); 2022 Feb; 22(3):. PubMed ID: 35161899

  • 12. Predictive modeling of consumer purchase behavior on social media: Integrating theory of planned behavior and machine learning for actionable insights.
    Azad MS; Khan SS; Hossain R; Rahman R; Momen S
    PLoS One; 2023; 18(12):e0296336. PubMed ID: 38150431

  • 13. To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods.
    Amparore E; Perotti A; Bajardi P
    PeerJ Comput Sci; 2021; 7():e479. PubMed ID: 33977131

  • 14. Explaining Black-Box Models for Biomedical Text Classification.
    Moradi M; Samwald M
    IEEE J Biomed Health Inform; 2021 Aug; 25(8):3112-3120. PubMed ID: 33534720

  • 15. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
    Hatwell J; Gaber MM; Atif Azad RM
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):250. PubMed ID: 33008388

  • 16. A Machine Learning-Based Water Potability Prediction Model by Using Synthetic Minority Oversampling Technique and Explainable AI.
    Patel J; Amipara C; Ahanger TA; Ladhva K; Gupta RK; Alsaab HO; Althobaiti YS; Ratna R
    Comput Intell Neurosci; 2022; 2022():9283293. PubMed ID: 36177311

  • 17. Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
    Ladbury C; Zarinshenas R; Semwal H; Tam A; Vaidehi N; Rodin AS; Liu A; Glaser S; Salgia R; Amini A
    Transl Cancer Res; 2022 Oct; 11(10):3853-3868. PubMed ID: 36388027

  • 18. Understanding the black-box: towards interpretable and reliable deep learning models.
    Qamar T; Bawany NZ
    PeerJ Comput Sci; 2023; 9():e1629. PubMed ID: 38077598

  • 19. How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare.
    Allgaier J; Mulansky L; Draelos RL; Pryss R
    Artif Intell Med; 2023 Sep; 143():102616. PubMed ID: 37673561

  • 20. A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare.
    Barda AJ; Horvat CM; Hochheiser H
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):257. PubMed ID: 33032582

Page 1 of 11.