BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

410 related articles for the article with PubMed ID 32361862

  • 1. Interpretation of machine learning models using Shapley values: application to compound potency and multi-target activity predictions.
    Rodríguez-Pérez R; Bajorath J
    J Comput Aided Mol Des; 2020 Oct; 34(10):1013-1026. PubMed ID: 32361862

  • 2. Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values.
    Rodríguez-Pérez R; Bajorath J
    J Med Chem; 2020 Aug; 63(16):8761-8777. PubMed ID: 31512867

  • 3. Evaluation of multi-target deep neural network models for compound potency prediction under increasingly challenging test conditions.
    Rodríguez-Pérez R; Bajorath J
    J Comput Aided Mol Des; 2021 Mar; 35(3):285-295. PubMed ID: 33598870

  • 4. Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method.
    Le TT; Kim H; Kang H; Kim H
    Sensors (Basel); 2022 Feb; 22(3):. PubMed ID: 35161899

  • 5. Explaining multivariate molecular diagnostic tests via Shapley values.
    Roder J; Maguire L; Georgantas R; Roder H
    BMC Med Inform Decis Mak; 2021 Jul; 21(1):211. PubMed ID: 34238309

  • 6. Calculation of exact Shapley values for support vector machines with Tanimoto kernel enables model interpretation.
    Feldmann C; Bajorath J
    iScience; 2022 Sep; 25(9):105023. PubMed ID: 36105596

  • 7. Calculation of exact Shapley values for explaining support vector machine models using the radial basis function kernel.
    Mastropietro A; Feldmann C; Bajorath J
    Sci Rep; 2023 Nov; 13(1):19561. PubMed ID: 37949930

  • 8. Explaining Multiclass Compound Activity Predictions Using Counterfactuals and Shapley Values.
    Lamens A; Bajorath J
    Molecules; 2023 Jul; 28(14):. PubMed ID: 37513472

  • 9. Explanation of machine learning models using Shapley additive explanation and application for real data in hospital.
    Nohara Y; Matsumoto K; Soejima H; Nakashima N
    Comput Methods Programs Biomed; 2022 Feb; 214():106584. PubMed ID: 34942412

  • 10. An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach.
    Nordin N; Zainol Z; Mohd Noor MH; Chan LF
    Asian J Psychiatr; 2023 Jan; 79():103316. PubMed ID: 36395702

  • 11. GPUTreeShap: massively parallel exact calculation of SHAP scores for tree ensembles.
    Mitchell R; Frank E; Holmes G
    PeerJ Comput Sci; 2022; 8():e880. PubMed ID: 35494875

  • 12. Machine Learning Models for Predicting Influential Factors of Early Outcomes in Acute Ischemic Stroke: Registry-Based Study.
    Su PY; Wei YC; Luo H; Liu CH; Huang WY; Chen KF; Lin CP; Wei HY; Lee TH
    JMIR Med Inform; 2022 Mar; 10(3):e32508. PubMed ID: 35072631

  • 13. Verifying explainability of a deep learning tissue classifier trained on RNA-seq data.
    Yap M; Johnston RL; Foley H; MacDonald S; Kondrashova O; Tran KA; Nones K; Koufariotis LT; Bean C; Pearson JV; Trzaskowski M; Waddell N
    Sci Rep; 2021 Jan; 11(1):2641. PubMed ID: 33514769

  • 14. Explaining machine-learning models for gamma-ray detection and identification.
    Bandstra MS; Curtis JC; Ghawaly JM; Jones AC; Joshi THY
    PLoS One; 2023; 18(6):e0286829. PubMed ID: 37339151

  • 15. Survival prediction of glioblastoma patients using modern deep learning and machine learning techniques.
    Babaei Rikan S; Sorayaie Azar A; Naemi A; Bagherzadeh Mohasefi J; Pirnejad H; Wiil UK
    Sci Rep; 2024 Jan; 14(1):2371. PubMed ID: 38287149

  • 16. An artificial neural network-pharmacokinetic model and its interpretation using Shapley additive explanations.
    Ogami C; Tsuji Y; Seki H; Kawano H; To H; Matsumoto Y; Hosono H
    CPT Pharmacometrics Syst Pharmacol; 2021 Jul; 10(7):760-768. PubMed ID: 33955705

  • 17. Stable feature selection utilizing Graph Convolutional Neural Network and Layer-wise Relevance Propagation for biomarker discovery in breast cancer.
    Chereda H; Leha A; Beißbarth T
    Artif Intell Med; 2024 May; 151():102840. PubMed ID: 38658129

  • 18. Fast Hierarchical Games for Image Explanations.
    Teneggi J; Luster A; Sulam J
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4494-4503. PubMed ID: 35816535

  • 19. Breast cancer molecular subtype prediction: Improving interpretability of complex machine-learning models based on multiparametric-MRI features using SHapley Additive exPlanations (SHAP) methodology.
    Crombé A; Kataoka M
    Diagn Interv Imaging; 2024 May; 105(5):161-162. PubMed ID: 38365542

  • 20. On the interpretability of machine learning methods in crash frequency modeling and crash modification factor development.
    Wen X; Xie Y; Jiang L; Li Y; Ge T
    Accid Anal Prev; 2022 Apr; 168():106617. PubMed ID: 35202941
