BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

125 related articles for article (PubMed ID: 37217691)

  • 1. Theory and rationale of interpretable all-in-one pattern discovery and disentanglement system.
    Wong AKC; Zhou PY; Lee AE
    NPJ Digit Med; 2023 May; 6(1):92. PubMed ID: 37217691

  • 2. Explanation and prediction of clinical data with imbalanced class distribution based on pattern discovery and disentanglement.
    Zhou PY; Wong AKC
    BMC Med Inform Decis Mak; 2021 Jan; 21(1):16. PubMed ID: 33422088

  • 3. Pattern discovery and disentanglement on relational datasets.
    Wong AKC; Zhou PY; Butt ZA
    Sci Rep; 2021 Mar; 11(1):5688. PubMed ID: 33707478

  • 4. Discovery and disentanglement of aligned residue associations from aligned pattern clusters to reveal subgroup characteristics.
    Zhou PY; Sze-To A; Wong AKC
    BMC Med Genomics; 2018 Nov; 11(Suppl 5):103. PubMed ID: 30453949

  • 5. Revealing Subtle Functional Subgroups in Class A Scavenger Receptors by Pattern Discovery and Disentanglement of Aligned Pattern Clusters.
    Zhou PY; Lee EA; Sze-To A; Wong AKC
    Proteomes; 2018 Feb; 6(1):. PubMed ID: 29419792

  • 6. Imbalanced target prediction with pattern discovery on clinical data repositories.
    Chan TM; Li Y; Chiau CC; Zhu J; Jiang J; Huo Y
    BMC Med Inform Decis Mak; 2017 Apr; 17(1):47. PubMed ID: 28427384

  • 7. Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning.
    Marconato E; Passerini A; Teso S
    Entropy (Basel); 2023 Nov; 25(12):. PubMed ID: 38136454

  • 8. Disentangling the latent space of GANs for semantic face editing.
    Niu Y; Zhou M; Li Z
    PLoS One; 2023; 18(10):e0293496. PubMed ID: 37883462

  • 9. Development of prediction models for one-year brain tumour survival using machine learning: a comparison of accuracy and interpretability.
    Charlton CE; Poon MTC; Brennan PM; Fleuriot JD
    Comput Methods Programs Biomed; 2023 May; 233():107482. PubMed ID: 36947980

  • 10. Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction.
    Cornille N; Laenen K; Sun J; Moens MF
    Entropy (Basel); 2023 Nov; 25(11):. PubMed ID: 37998247

  • 11. Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals.
    Bienefeld N; Boss JM; Lüthy R; Brodbeck D; Azzati J; Blaser M; Willms J; Keller E
    NPJ Digit Med; 2023 May; 6(1):94. PubMed ID: 37217779

  • 12. TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation.
    Li J; Li Y; Xiang X; Xia ST; Dong S; Cai Y
    Entropy (Basel); 2020 Oct; 22(11):. PubMed ID: 33286971

  • 13. No silver bullet: interpretable ML models must be explained.
    Marques-Silva J; Ignatiev A
    Front Artif Intell; 2023; 6():1128212. PubMed ID: 37168320

  • 14. Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction.
    Pintelas E; Liaskos M; Livieris IE; Kotsiantis S; Pintelas P
    J Imaging; 2020 May; 6(6):. PubMed ID: 34460583

  • 15. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.
    Graziani M; Dutkiewicz L; Calvaresi D; Amorim JP; Yordanova K; Vered M; Nair R; Abreu PH; Blanke T; Pulignano V; Prior JO; Lauwaert L; Reijers W; Depeursinge A; Andrearczyk V; Müller H
    Artif Intell Rev; 2023; 56(4):3473-3504. PubMed ID: 36092822

  • 16. Explainable, trustworthy, and ethical machine learning for healthcare: A survey.
    Rasheed K; Qayyum A; Ghaly M; Al-Fuqaha A; Razi A; Qadir J
    Comput Biol Med; 2022 Oct; 149():106043. PubMed ID: 36115302

  • 17. CauRuler: Causal irredundant association rule miner for complex patient trajectory modelling.
    Guillamet GH; Seguí FL; Vidal-Alaball J; López B
    Comput Biol Med; 2023 Mar; 155():106636. PubMed ID: 36780801

  • 18. Towards a Knowledge Graph-Based Explainable Decision Support System in Healthcare.
    Rajabi E; Etminani K
    Stud Health Technol Inform; 2021 May; 281():502-503. PubMed ID: 34042621

  • 19. An Urban Population Health Observatory for Disease Causal Pathway Analysis and Decision Support: Underlying Explainable Artificial Intelligence Model.
    Brakefield WS; Ammar N; Shaban-Nejad A
    JMIR Form Res; 2022 Jul; 6(7):e36055. PubMed ID: 35857363
