

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

112 related articles for article (PubMed ID: 34308439)

  • 1. Efficient Shapley Explanation For Features Importance Estimation Under Uncertainty.
    Li X; Zhou Y; Dvornek NC; Gu Y; Ventola P; Duncan JS
    Med Image Comput Comput Assist Interv; 2020; 12261():792-801. PubMed ID: 34308439

  • 2. Explanation of machine learning models using shapley additive explanation and application for real data in hospital.
    Nohara Y; Matsumoto K; Soejima H; Nakashima N
    Comput Methods Programs Biomed; 2022 Feb; 214():106584. PubMed ID: 34942412

  • 3. Fast Hierarchical Games for Image Explanations.
    Teneggi J; Luster A; Sulam J
    IEEE Trans Pattern Anal Mach Intell; 2023 Apr; 45(4):4494-4503. PubMed ID: 35816535

  • 4. Explanation-Driven Deep Learning Model for Prediction of Brain Tumour Status Using MRI Image Data.
    Gaur L; Bhandari M; Razdan T; Mallik S; Zhao Z
    Front Genet; 2022; 13():822666. PubMed ID: 35360838

  • 5. Efficient Interpretation of Deep Learning Models Using Graph Structure and Cooperative Game Theory: Application to ASD Biomarker Discovery.
    Li X; Dvornek NC; Zhou Y; Zhuang J; Ventola P; Duncan JS
    Inf Process Med Imaging; 2019 Jun; 11492():718-730. PubMed ID: 32982121

  • 6. Interpretation of machine learning models using shapley values: application to compound potency and multi-target activity predictions.
    Rodríguez-Pérez R; Bajorath J
    J Comput Aided Mol Des; 2020 Oct; 34(10):1013-1026. PubMed ID: 32361862

  • 7. Shapley variable importance cloud for interpretable machine learning.
    Ning Y; Ong MEH; Chakraborty B; Goldstein BA; Ting DSW; Vaughan R; Liu N
    Patterns (N Y); 2022 Apr; 3(4):100452. PubMed ID: 35465224

  • 8. CVD22: Explainable artificial intelligence determination of the relationship of troponin to D-Dimer, mortality, and CK-MB in COVID-19 patients.
    Kırboğa KK; Küçüksille EU; Naldan ME; Işık M; Gülcü O; Aksakal E
    Comput Methods Programs Biomed; 2023 May; 233():107492. PubMed ID: 36965300

  • 9. Compensated Integrated Gradients for Reliable Explanation of Electroencephalogram Signal Classification.
    Kawai Y; Tachikawa K; Park J; Asada M
    Brain Sci; 2022 Jun; 12(7):. PubMed ID: 35884656

  • 10. Machine Learning Models Using SHapley Additive exPlanation for Fire Risk Assessment Mode and Effects Analysis of Stadiums.
    Lu Y; Fan X; Zhang Y; Wang Y; Jiang X
    Sensors (Basel); 2023 Feb; 23(4):. PubMed ID: 36850757

  • 11. Explaining multivariate molecular diagnostic tests via Shapley values.
    Roder J; Maguire L; Georgantas R; Roder H
    BMC Med Inform Decis Mak; 2021 Jul; 21(1):211. PubMed ID: 34238309

  • 12. Machine Learning Models for Predicting Influential Factors of Early Outcomes in Acute Ischemic Stroke: Registry-Based Study.
    Su PY; Wei YC; Luo H; Liu CH; Huang WY; Chen KF; Lin CP; Wei HY; Lee TH
    JMIR Med Inform; 2022 Mar; 10(3):e32508. PubMed ID: 35072631

  • 13. Construction and Interpretation of Prediction Model of Teicoplanin Trough Concentration
    Ma P; Liu R; Gu W; Dai Q; Gan Y; Cen J; Shang S; Liu F; Chen Y
    Front Med (Lausanne); 2022; 9():808969. PubMed ID: 35360734

  • 14. Combat COVID-19 infodemic using explainable natural language processing models.
    Ayoub J; Yang XJ; Zhou F
    Inf Process Manag; 2021 Jul; 58(4):102569. PubMed ID: 33776192

  • 15. Explainable deep learning predictions for illness risk of mental disorders in Nanjing, China.
    Wang C; Feng L; Qi Y
    Environ Res; 2021 Nov; 202():111740. PubMed ID: 34329635

  • 16. Explainable machine learning for chronic lymphocytic leukemia treatment prediction using only inexpensive tests.
    Meiseles A; Paley D; Ziv M; Hadid Y; Rokach L; Tadmor T
    Comput Biol Med; 2022 Jun; 145():105490. PubMed ID: 35405402

  • 17. An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach.
    Nordin N; Zainol Z; Mohd Noor MH; Chan LF
    Asian J Psychiatr; 2023 Jan; 79():103316. PubMed ID: 36395702

  • 18. Learning from mistakes-Assessing the performance and uncertainty in process-based models.
    Feigl M; Roesky B; Herrnegger M; Schulz K; Hayashi M
    Hydrol Process; 2022 Feb; 36(2):e14515. PubMed ID: 35910683

  • 19. Predicting Bulk Average Velocity with Rigid Vegetation in Open Channels Using Tree-Based Machine Learning: A Novel Approach Using Explainable Artificial Intelligence.
    Meddage DPP; Ekanayake IU; Herath S; Gobirahavan R; Muttil N; Rathnayake U
    Sensors (Basel); 2022 Jun; 22(12):. PubMed ID: 35746184

  • 20. Feature importance: Opening a soil-transmitted helminth machine learning model via SHAP.
    Scavuzzo CM; Scavuzzo JM; Campero MN; Anegagrie M; Aramendia AA; Benito A; Periago V
    Infect Dis Model; 2022 Mar; 7(1):262-276. PubMed ID: 35224316
