BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

143 related articles for article (PubMed ID: 35200732)

  • 1. Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach.
    Sudars K; Namatēvs I; Ozols K
    J Imaging; 2022 Jan; 8(2):. PubMed ID: 35200732

  • 2. Toward explainable AI-empowered cognitive health assessment.
    Javed AR; Khan HU; Alomari MKB; Sarwar MU; Asim M; Almadhor AS; Khan MZ
    Front Public Health; 2023; 11():1024195. PubMed ID: 36969684

  • 3. FP-CNN: Fuzzy pooling-based convolutional neural network for lung ultrasound image classification with explainable AI.
    Hasan MM; Hossain MM; Rahman MM; Azad A; Alyami SA; Moni MA
    Comput Biol Med; 2023 Oct; 165():107407. PubMed ID: 37678140

  • 4. CNN-BLSTM based deep learning framework for eukaryotic kinome classification: An explainability based approach.
    John C; Sahoo J; Sajan IK; Madhavan M; Mathew OK
    Comput Biol Chem; 2024 Aug; 112():108169. PubMed ID: 39137619

  • 5. fMRI volume classification using a 3D convolutional neural network robust to shifted and scaled neuronal activations.
    Vu H; Kim HC; Jung M; Lee JH
    Neuroimage; 2020 Dec; 223():117328. PubMed ID: 32896633

  • 6. Non-transfer Deep Learning of Optical Coherence Tomography for Post-hoc Explanation of Macular Disease Classification.
    Arefin R; Samad MD; Akyelken FA; Davanian A
    Proc (IEEE Int Conf Healthc Inform); 2021 Aug; 2021():48-52. PubMed ID: 36168324

  • 7. On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems.
    Nazat S; Arreche O; Abdallah M
    Sensors (Basel); 2024 May; 24(11):. PubMed ID: 38894306

  • 8. CEFEs: A CNN Explainable Framework for ECG Signals.
    Maweu BM; Dakshit S; Shamsuddin R; Prabhakaran B
    Artif Intell Med; 2021 May; 115():102059. PubMed ID: 34001319

  • 9. Non-human primate epidural ECoG analysis using explainable deep learning technology.
    Choi H; Lim S; Min K; Ahn KH; Lee KM; Jang DP
    J Neural Eng; 2021 Nov; 18(6):. PubMed ID: 34695809

  • 10. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
    Nyrup R; Robinson D
    Ethics Inf Technol; 2022; 24(1):13. PubMed ID: 35250370

  • 11. Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.
    Borys K; Schmitt YA; Nauta M; Seifert C; Krämer N; Friedrich CM; Nensa F
    Eur J Radiol; 2023 May; 162():110787. PubMed ID: 37001254

  • 12. Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization.
    Papadimitroulas P; Brocki L; Christopher Chung N; Marchadour W; Vermet F; Gaubert L; Eleftheriadis V; Plachouris D; Visvikis D; Kagadis GC; Hatt M
    Phys Med; 2021 Mar; 83():108-121. PubMed ID: 33765601

  • 13. Traffic sign classification using CNN and detection using faster-RCNN and YOLOV4.
    Youssouf N
    Heliyon; 2022 Dec; 8(12):e11792. PubMed ID: 36471847

  • 14. Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging.
    Alkhalaf S; Alturise F; Bahaddad AA; Elnaim BME; Shabana S; Abdel-Khalek S; Mansour RF
    Cancers (Basel); 2023 Feb; 15(5):. PubMed ID: 36900283

  • 15. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
    Markus AF; Kors JA; Rijnbeek PR
    J Biomed Inform; 2021 Jan; 113():103655. PubMed ID: 33309898

  • 16. Efficient mapping of crash risk at intersections with connected vehicle data and deep learning models.
    Hu J; Huang MC; Yu X
    Accid Anal Prev; 2020 Sep; 144():105665. PubMed ID: 32683130

  • 17. Fairness-related performance and explainability effects in deep learning models for brain image analysis.
    Stanley EAM; Wilms M; Mouches P; Forkert ND
    J Med Imaging (Bellingham); 2022 Nov; 9(6):061102. PubMed ID: 36046104

  • 18. Discrimination of unsound wheat kernels based on deep convolutional generative adversarial network and near-infrared hyperspectral imaging technology.
    Li H; Zhang L; Sun H; Rao Z; Ji H
    Spectrochim Acta A Mol Biomol Spectrosc; 2022 Mar; 268():120722. PubMed ID: 34902690

  • 19. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.
    Arcos-García Á; Álvarez-García JA; Soria-Morillo LM
    Neural Netw; 2018 Mar; 99():158-165. PubMed ID: 29427842

  • 20. Explaining decisions of a light-weight deep neural network for real-time coronary artery disease classification in magnetic resonance imaging.
    Iqbal T; Khalid A; Ullah I
    J Real Time Image Process; 2024; 21(2):31. PubMed ID: 38348346
