These tools will no longer be maintained as of December 31, 2024.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

123 related articles for article (PubMed ID: 38529008)

  • 1. X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model.
    Jeyakumar JV; Sarker A; Garcia LA; Srivastava M
    Proc ACM Interact Mob Wearable Ubiquitous Technol; 2023 Mar; 7(1):. PubMed ID: 38529008

  • 2. CEFEs: A CNN Explainable Framework for ECG Signals.
    Maweu BM; Dakshit S; Shamsuddin R; Prabhakaran B
    Artif Intell Med; 2021 May; 115():102059. PubMed ID: 34001319

  • 3. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
    Lucieri A; Bajwa MN; Braun SA; Malik MI; Dengel A; Ahmed S
    Comput Methods Programs Biomed; 2022 Mar; 215():106620. PubMed ID: 35033756

  • 4. Toward explainable AI-empowered cognitive health assessment.
    Javed AR; Khan HU; Alomari MKB; Sarwar MU; Asim M; Almadhor AS; Khan MZ
    Front Public Health; 2023; 11():1024195. PubMed ID: 36969684

  • 5. An Explainable EEG-Based Human Activity Recognition Model Using Machine-Learning Approach and LIME.
    Hussain I; Jany R; Boyer R; Azad A; Alyami SA; Park SJ; Hasan MM; Hossain MA
    Sensors (Basel); 2023 Aug; 23(17):. PubMed ID: 37687908

  • 6. Concept-based AI interpretability in physiological time-series data: Example of abnormality detection in electroencephalography.
    Brenner A; Knispel F; Fischer FP; Rossmanith P; Weber Y; Koch H; Röhrig R; Varghese J; Kutafina E
    Comput Methods Programs Biomed; 2024 Sep; 257():108448. PubMed ID: 39395304

  • 7. Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI.
    Lucieri A; Dengel A; Ahmed S
    Front Bioinform; 2023; 3():1194993. PubMed ID: 37484865

  • 8. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 9. Explainable Connectionist-Temporal-Classification-Based Scene Text Recognition.
    Buoy R; Iwamura M; Srun S; Kise K
    J Imaging; 2023 Nov; 9(11):. PubMed ID: 37998095

  • 10. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study.
    Fernandes GJ; Choi A; Schauer JM; Pfammatter AF; Spring BJ; Darwiche A; Alshurafa NI
    J Med Internet Res; 2023 Sep; 25():e42047. PubMed ID: 37672333

  • 11. XAI-FR: Explainable AI-Based Face Recognition Using Deep Neural Networks.
    Rajpal A; Sehra K; Bagri R; Sikka P
    Wirel Pers Commun; 2023; 129(1):663-680. PubMed ID: 36531522

  • 12. Explainable deep learning ensemble for food image analysis on edge devices.
    Tahir GA; Loo CK
    Comput Biol Med; 2021 Dec; 139():104972. PubMed ID: 34749093

  • 13. Explainable, trustworthy, and ethical machine learning for healthcare: A survey.
    Rasheed K; Qayyum A; Ghaly M; Al-Fuqaha A; Razi A; Qadir J
    Comput Biol Med; 2022 Oct; 149():106043. PubMed ID: 36115302

  • 14. Toward explainable AI (XAI) for mental health detection based on language behavior.
    Kerz E; Zanwar S; Qiao Y; Wiechmann D
    Front Psychiatry; 2023; 14():1219479. PubMed ID: 38144474

  • 15. Explaining the black-box smoothly-A counterfactual approach.
    Singla S; Eslami M; Pollack B; Wallace S; Batmanghelich K
    Med Image Anal; 2023 Feb; 84():102721. PubMed ID: 36571975

  • 16. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
    Wani NA; Kumar R; Bedi J
    Comput Methods Programs Biomed; 2024 Jan; 243():107879. PubMed ID: 37897989

  • 17. Concept-based Lesion Aware Transformer for Interpretable Retinal Disease Diagnosis.
    Wen C; Ye M; Li H; Chen T; Xiao X
    IEEE Trans Med Imaging; 2024 Jul; PP():. PubMed ID: 39012729

  • 18. A Novel Framework Based on Deep Learning Architecture for Continuous Human Activity Recognition with Inertial Sensors.
    Suglia V; Palazzo L; Bevilacqua V; Passantino A; Pagano G; D'Addio G
    Sensors (Basel); 2024 Mar; 24(7):. PubMed ID: 38610410

  • 19. Object recognition in medical images via anatomy-guided deep learning.
    Jin C; Udupa JK; Zhao L; Tong Y; Odhner D; Pednekar G; Nag S; Lewis S; Poole N; Mannikeri S; Govindasamy S; Singh A; Camaratta J; Owens S; Torigian DA
    Med Image Anal; 2022 Oct; 81():102527. PubMed ID: 35830745

  • 20. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.
    Holzinger A; Carrington A; Müller H
    Kunstliche Intell (Oldenbourg); 2020; 34(2):193-198. PubMed ID: 32549653
