BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

127 related articles for the article with PubMed ID 38889117

  • 1. CAManim: Animating end-to-end network activation maps.
    Kaczmarek E; Miguel OX; Bowie AC; Ducharme R; Dingwall-Harvey ALJ; Hawken S; Armour CM; Walker MC; Dick K
    PLoS One; 2024; 19(6):e0296985. PubMed ID: 38889117

  • 2. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare.
    Lamba K; Rani S
    J Neurosci Methods; 2024 Aug; 408():110159. PubMed ID: 38723868

  • 3. DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence.
    Wani NA; Kumar R; Bedi J
    Comput Methods Programs Biomed; 2024 Jan; 243():107879. PubMed ID: 37897989

  • 4. Interpretable Artificial Intelligence through Locality Guided Neural Networks.
    Tan R; Gao L; Khan N; Guan L
    Neural Netw; 2022 Nov; 155():58-73. PubMed ID: 36041281

  • 5. An explainable deep-learning model to stage sleep states in children and propose novel EEG-related patterns in sleep apnea.
    Vaquerizo-Villar F; Gutiérrez-Tobal GC; Calvo E; Álvarez D; Kheirandish-Gozal L; Del Campo F; Gozal D; Hornero R
    Comput Biol Med; 2023 Oct; 165():107419. PubMed ID: 37703716

  • 6. CEFEs: A CNN Explainable Framework for ECG Signals.
    Maweu BM; Dakshit S; Shamsuddin R; Prabhakaran B
    Artif Intell Med; 2021 May; 115():102059. PubMed ID: 34001319

  • 7. Explaining and Visualizing Embeddings of One-Dimensional Convolutional Models in Human Activity Recognition Tasks.
    Aquino G; Costa MGF; Filho CFFC
    Sensors (Basel); 2023 Apr; 23(9):. PubMed ID: 37177616

  • 8. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks.
    Nazir S; Dickson DM; Akram MU
    Comput Biol Med; 2023 Apr; 156():106668. PubMed ID: 36863192

  • 9. Human attention guided explainable artificial intelligence for computer vision models.
    Liu G; Zhang J; Chan AB; Hsiao JH
    Neural Netw; 2024 Sep; 177():106392. PubMed ID: 38788290

  • 10. A walk in the black-box: 3D visualization of large neural networks in virtual reality.
    Linse C; Alshazly H; Martinetz T
    Neural Comput Appl; 2022; 34(23):21237-21252. PubMed ID: 35996678

  • 11. Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing.
    Zhang H; Ogasawara K
    Bioengineering (Basel); 2023 Sep; 10(9):. PubMed ID: 37760173

  • 12. Explainable AI for Bioinformatics: Methods, Tools and Applications.
    Karim MR; Islam T; Shajalal M; Beyan O; Lange C; Cochez M; Rebholz-Schuhmann D; Decker S
    Brief Bioinform; 2023 Sep; 24(5):. PubMed ID: 37478371

  • 13. What Does Deep Learning See? Insights From a Classifier Trained to Predict Contrast Enhancement Phase From CT Images.
    Philbrick KA; Yoshida K; Inoue D; Akkus Z; Kline TL; Weston AD; Korfiatis P; Takahashi N; Erickson BJ
    AJR Am J Roentgenol; 2018 Dec; 211(6):1184-1193. PubMed ID: 30403527

  • 14. Explaining decisions of graph convolutional neural networks: patient-specific molecular subnetworks responsible for metastasis prediction in breast cancer.
    Chereda H; Bleckmann A; Menck K; Perera-Bel J; Stegmaier P; Auer F; Kramer F; Leha A; Beißbarth T
    Genome Med; 2021 Mar; 13(1):42. PubMed ID: 33706810

  • 15. Explainability of deep neural networks for MRI analysis of brain tumors.
    Zeineldin RA; Karar ME; Elshaer Z; Coburger J; Wirtz CR; Burgert O; Mathis-Ullrich F
    Int J Comput Assist Radiol Surg; 2022 Sep; 17(9):1673-1683. PubMed ID: 35460019

  • 16. Evaluation of interpretability for deep learning algorithms in EEG emotion recognition: A case study in autism.
    Mayor Torres JM; Medina-DeVilliers S; Clarkson T; Lerner MD; Riccardi G
    Artif Intell Med; 2023 Sep; 143():102545. PubMed ID: 37673554

  • 17. Semantic Interpretation for Convolutional Neural Networks: What Makes a Cat a Cat?
    Xu H; Chen Y; Zhang D
    Adv Sci (Weinh); 2022 Dec; 9(35):e2204723. PubMed ID: 36216585

  • 18. Deep convolutional neural network and IoT technology for healthcare.
    Wassan S; Dongyan H; Suhail B; Jhanjhi NZ; Xiao G; Ahmed S; Murugesan RK
    Digit Health; 2024; 10():20552076231220123. PubMed ID: 38250147

  • 19. Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features.
    Wang CJ; Hamm CA; Savic LJ; Ferrante M; Schobert I; Schlachter T; Lin M; Weinreb JC; Duncan JS; Chapiro J; Letzen B
    Eur Radiol; 2019 Jul; 29(7):3348-3357. PubMed ID: 31093705

  • 20. Explainable deep drug-target representations for binding affinity prediction.
    Monteiro NRC; Simões CJV; Ávila HV; Abbasi M; Oliveira JL; Arrais JP
    BMC Bioinformatics; 2022 Jun; 23(1):237. PubMed ID: 35715734
