BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

143 related articles for article (PubMed ID: 39372662)

  • 1. A global model-agnostic rule-based XAI method based on Parameterized Event Primitives for time series classifiers.
    Mekonnen ET; Longo L; Dondio P
    Front Artif Intell; 2024; 7():1381921. PubMed ID: 39372662

  • 2. Explainable AI: Machine Learning Interpretation in Blackcurrant Powders.
    Przybył K
    Sensors (Basel); 2024 May; 24(10):. PubMed ID: 38794052

  • 3. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
    Nambiar A; S H; S S
    Front Artif Intell; 2023; 6():1272506. PubMed ID: 38111787

  • 4. A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study.
    Chen H; Cohen E; Wilson D; Alfred M
    JMIR Hum Factors; 2024 Jan; 11():e53378. PubMed ID: 38271086

  • 5. A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods.
    Vilone G; Longo L
    Front Artif Intell; 2021; 4():717899. PubMed ID: 34805973

  • 6. Toward explainable AI (XAI) for mental health detection based on language behavior.
    Kerz E; Zanwar S; Qiao Y; Wiechmann D
    Front Psychiatry; 2023; 14():1219479. PubMed ID: 38144474

  • 7. Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences.
    Hatwell J; Gaber MM; Atif Azad RM
    BMC Med Inform Decis Mak; 2020 Oct; 20(1):250. PubMed ID: 33008388

  • 8. Understanding the black-box: towards interpretable and reliable deep learning models.
    Qamar T; Bawany NZ
    PeerJ Comput Sci; 2023; 9():e1629. PubMed ID: 38077598

  • 9. Exploring Explainable AI Techniques for Text Classification in Healthcare: A Scoping Review.
    Madi IAE; Redjdal A; Bouaud J; Seroussi B
    Stud Health Technol Inform; 2024 Aug; 316():846-850. PubMed ID: 39176925

  • 10. Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions.
    Parimbelli E; Buonocore TM; Nicora G; Michalowski W; Wilk S; Bellazzi R
    Artif Intell Med; 2023 Jan; 135():102471. PubMed ID: 36628785

  • 11. Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.
    Ladbury C; Zarinshenas R; Semwal H; Tam A; Vaidehi N; Rodin AS; Liu A; Glaser S; Salgia R; Amini A
    Transl Cancer Res; 2022 Oct; 11(10):3853-3868. PubMed ID: 36388027

  • 12. Toward explainable AI-empowered cognitive health assessment.
    Javed AR; Khan HU; Alomari MKB; Sarwar MU; Asim M; Almadhor AS; Khan MZ
    Front Public Health; 2023; 11():1024195. PubMed ID: 36969684

  • 13. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
    Wani NA; Kumar R; Bedi J
    Comput Methods Programs Biomed; 2024 Jan; 243():107879. PubMed ID: 37897989

  • 14. Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI.
    Chraibi Kaadoud I; Bennetot A; Mawhin B; Charisi V; Díaz-Rodríguez N
    Neural Netw; 2022 Nov; 155():95-118. PubMed ID: 36049396

  • 15. Interpretable heartbeat classification using local model-agnostic explanations on ECGs.
    Neves I; Folgado D; Santos S; Barandas M; Campagner A; Ronzio L; Cabitza F; Gamboa H
    Comput Biol Med; 2021 Jun; 133():104393. PubMed ID: 33915362

  • 16. Neuro-XAI: Explainable deep learning framework based on deeplabV3+ and bayesian optimization for segmentation and classification of brain tumor in MRI scans.
    Saeed T; Khan MA; Hamza A; Shabaz M; Khan WZ; Alhayan F; Jamel L; Baili J
    J Neurosci Methods; 2024 Oct; 410():110247. PubMed ID: 39128599

  • 17. Multimodal brain tumor segmentation and classification from MRI scans based on optimized DeepLabV3+ and interpreted networks information fusion empowered with explainable AI.
    Ullah MS; Khan MA; Albarakati HM; Damaševičius R; Alsenan S
    Comput Biol Med; 2024 Oct; 182():109183. PubMed ID: 39357134

  • 18. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
    Ali S; Akhlaq F; Imran AS; Kastrati Z; Daudpota SM; Moosa M
    Comput Biol Med; 2023 Nov; 166():107555. PubMed ID: 37806061

  • 19. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
    Lamba K; Rani S
    J Neurosci Methods; 2024 Aug; 408():110159. PubMed ID: 38723868

  • 20. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification.
    Yuan H; Hong C; Jiang PT; Zhao G; Tran NTA; Xu X; Yan YY; Liu N
    J Biomed Inform; 2024 Aug; 156():104673. PubMed ID: 38862083
