These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

130 related articles for article (PubMed ID: 37266485)

  • 1. GLACIER: Glass-box transformer for interpretable dynamic neuroimaging.
    Mahmood U; Fu Z; Calhoun V; Plis S
    Proc IEEE Int Conf Acoust Speech Signal Process; 2023 Jun; 2023():. PubMed ID: 37266485

  • 2. Through the looking glass: Deep interpretable dynamic directed connectivity in resting fMRI.
    Mahmood U; Fu Z; Ghosh S; Calhoun V; Plis S
    Neuroimage; 2022 Dec; 264():119737. PubMed ID: 36356823

  • 3. MGRW-Transformer: Multigranularity Random Walk Transformer Model for Interpretable Learning.
    Ding W; Geng Y; Huang J; Ju H; Wang H; Lin CT
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37938954

  • 4. Multimodal Data Fusion of Deep Learning and Dynamic Functional Connectivity Features to Predict Alzheimer's Disease Progression.
    Abrol A; Fu Z; Du Y; Calhoun VD
    Annu Int Conf IEEE Eng Med Biol Soc; 2019 Jul; 2019():4409-4413. PubMed ID: 31946844

  • 5. Retrosynthesis prediction with an interpretable deep-learning framework based on molecular assembly tasks.
    Wang Y; Pang C; Wang Y; Jin J; Zhang J; Zeng X; Su R; Zou Q; Wei L
    Nat Commun; 2023 Oct; 14(1):6155. PubMed ID: 37788995

  • 6. Interpretable neural networks: principles and applications.
    Liu Z; Xu F
    Front Artif Intell; 2023; 6():974295. PubMed ID: 37899962

  • 7. A Deep Network Model on Dynamic Functional Connectivity With Applications to Gender Classification and Intelligence Prediction.
    Fan L; Su J; Qin J; Hu D; Shen H
    Front Neurosci; 2020; 14():881. PubMed ID: 33013292

  • 8. Information bottleneck-based interpretable multitask network for breast cancer classification and segmentation.
    Wang J; Zheng Y; Ma J; Li X; Wang C; Gee J; Wang H; Huang W
    Med Image Anal; 2023 Jan; 83():102687. PubMed ID: 36436356

  • 9. Development of prediction models for one-year brain tumour survival using machine learning: a comparison of accuracy and interpretability.
    Charlton CE; Poon MTC; Brennan PM; Fleuriot JD
    Comput Methods Programs Biomed; 2023 May; 233():107482. PubMed ID: 36947980

  • 10. Interpretable clinical prediction via attention-based neural network.
    Chen P; Dong W; Wang J; Lu X; Kaymak U; Huang Z
    BMC Med Inform Decis Mak; 2020 Jul; 20(Suppl 3):131. PubMed ID: 32646437

  • 11. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 12. Interpretable deep learning architectures for improving drug response prediction performance: myth or reality?
    Li Y; Hostallero DE; Emad A
    Bioinformatics; 2023 Jun; 39(6):. PubMed ID: 37326960

  • 13. Imaging Connectomics and the Understanding of Brain Diseases.
    Insabato A; Deco G; Gilson M
    Adv Exp Med Biol; 2019; 1192():139-158. PubMed ID: 31705494

  • 14. Deep sr-DDL: Deep structurally regularized dynamic dictionary learning to integrate multimodal and dynamic functional connectomics data for multidimensional clinical characterizations.
    D'Souza NS; Nebel MB; Crocetti D; Robinson J; Wymbs N; Mostofsky SH; Venkataraman A
    Neuroimage; 2021 Nov; 241():118388. PubMed ID: 34271159

  • 15. Transformer for Gene Expression Modeling (T-GEM): An Interpretable Deep Learning Model for Gene Expression-Based Phenotype Predictions.
    Zhang TH; Hasib MM; Chiu YC; Han ZF; Jin YF; Flores M; Chen Y; Huang Y
    Cancers (Basel); 2022 Sep; 14(19):. PubMed ID: 36230685

  • 16. Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks.
    Leming M; Górriz JM; Suckling J
    Int J Neural Syst; 2020 Jul; 30(7):2050012. PubMed ID: 32308082

  • 17. MINDWALC: mining interpretable, discriminative walks for classification of nodes in a knowledge graph.
    Vandewiele G; Steenwinckel B; Turck F; Ongenae F
    BMC Med Inform Decis Mak; 2020 Dec; 20(Suppl 4):191. PubMed ID: 33317504

  • 18. Multi-label classification of Alzheimer's disease stages from resting-state fMRI-based correlation connectivity data and deep learning.
    Alorf A; Khan MUG
    Comput Biol Med; 2022 Dec; 151(Pt A):106240. PubMed ID: 36423532

  • 19. A Self-Interpretable Deep Learning Model for Seizure Prediction Using a Multi-Scale Prototypical Part Network.
    Gao Y; Liu A; Wang L; Qian R; Chen X
    IEEE Trans Neural Syst Rehabil Eng; 2023; 31():1847-1856. PubMed ID: 37030672

  • 20. Learning Cognitive-Test-Based Interpretable Rules for Prediction and Early Diagnosis of Dementia Using Neural Networks.
    Wang Z; Wang J; Liu N; Liu C; Li X; Dong L; Zhang R; Mao C; Duan Z; Zhang W; Gao J; Wang J;
    J Alzheimers Dis; 2022; 90(2):609-624. PubMed ID: 36155512
