These tools will no longer be maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

216 related articles for article (PubMed ID: 34372436)

  • 1. Explainable Anomaly Detection Framework for Maritime Main Engine Sensor Data.
    Kim D; Antariksa G; Handayani MP; Lee S; Lee J
    Sensors (Basel); 2021 Jul; 21(15):. PubMed ID: 34372436

  • 2. An Ensemble-Based Approach to Anomaly Detection in Marine Engine Sensor Streams for Efficient Condition Monitoring and Analysis.
    Kim D; Lee S; Lee J
    Sensors (Basel); 2020 Dec; 20(24):. PubMed ID: 33353051

  • 3. An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach.
    Nordin N; Zainol Z; Mohd Noor MH; Chan LF
    Asian J Psychiatr; 2023 Jan; 79():103316. PubMed ID: 36395702

  • 4. On Evaluating Black-Box Explainable AI Methods for Enhancing Anomaly Detection in Autonomous Driving Systems.
    Nazat S; Arreche O; Abdallah M
    Sensors (Basel); 2024 May; 24(11):. PubMed ID: 38894306

  • 5. An interpretable machine learning method for supporting ecosystem management: Application to species distribution models of freshwater macroinvertebrates.
    Cha Y; Shin J; Go B; Lee DS; Kim Y; Kim T; Park YS
    J Environ Manage; 2021 Aug; 291():112719. PubMed ID: 33946026

  • 6. Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method.
    Le TT; Kim H; Kang H; Kim H
    Sensors (Basel); 2022 Feb; 22(3):. PubMed ID: 35161899

  • 7. Interpretation of ensemble learning to predict water quality using explainable artificial intelligence.
    Park J; Lee WH; Kim KT; Park CY; Lee S; Heo TY
    Sci Total Environ; 2022 Aug; 832():155070. PubMed ID: 35398119

  • 8. Interpretation of machine learning models using shapley values: application to compound potency and multi-target activity predictions.
    Rodríguez-Pérez R; Bajorath J
    J Comput Aided Mol Des; 2020 Oct; 34(10):1013-1026. PubMed ID: 32361862

  • 9. Explainable AI-driven model for gastrointestinal cancer classification.
    Binzagr F
    Front Med (Lausanne); 2024; 11():1349373. PubMed ID: 38686367

  • 10. An Ensemble Approach for the Prediction of Diabetes Mellitus Using a Soft Voting Classifier with an Explainable AI.
    Kibria HB; Nahiduzzaman M; Goni MOF; Ahsan M; Haider J
    Sensors (Basel); 2022 Sep; 22(19):. PubMed ID: 36236367

  • 11. Understanding risk factors for postoperative mortality in neonates based on explainable machine learning technology.
    Hu Y; Gong X; Shu L; Zeng X; Duan H; Luo Q; Zhang B; Ji Y; Wang X; Shu Q; Li H
    J Pediatr Surg; 2021 Dec; 56(12):2165-2171. PubMed ID: 33863558

  • 12. Towards better process management in wastewater treatment plants: Process analytics based on SHAP values for tree-based machine learning methods.
    Wang D; Thunéll S; Lindberg U; Jiang L; Trygg J; Tysklind M
    J Environ Manage; 2022 Jan; 301():113941. PubMed ID: 34731954

  • 13. Interpretable and explainable AI (XAI) model for spatial drought prediction.
    Dikshit A; Pradhan B
    Sci Total Environ; 2021 Dec; 801():149797. PubMed ID: 34467917

  • 14. An explainable and efficient deep learning framework for video anomaly detection.
    Wu C; Shao S; Tunc C; Satam P; Hariri S
    Cluster Comput; 2022; 25(4):2715-2737. PubMed ID: 34840519

  • 15. Explaining multivariate molecular diagnostic tests via Shapley values.
    Roder J; Maguire L; Georgantas R; Roder H
    BMC Med Inform Decis Mak; 2021 Jul; 21(1):211. PubMed ID: 34238309

  • 16. Remaining Useful Life Prognosis for Turbofan Engine Using Explainable Deep Neural Networks with Dimensionality Reduction.
    Hong CW; Lee C; Lee K; Ko MS; Kim DE; Hur K
    Sensors (Basel); 2020 Nov; 20(22):. PubMed ID: 33228051

  • 17. Comparative analysis of explainable machine learning prediction models for hospital mortality.
    Stenwig E; Salvi G; Rossi PS; Skjærvold NK
    BMC Med Res Methodol; 2022 Feb; 22(1):53. PubMed ID: 35220950

  • 18. Unboxing Industry-Standard AI Models for Male Fertility Prediction with SHAP.
    GhoshRoy D; Alvi PA; Santosh KC
    Healthcare (Basel); 2023 Mar; 11(7):. PubMed ID: 37046855

  • 19. Feature Attribution Analysis to Quantify the Impact of Oceanographic and Maneuverability Factors on Vessel Shaft Power Using Explainable Tree-Based Model.
    Kim D; Handayani MP; Lee S; Lee J
    Sensors (Basel); 2023 Jan; 23(3):. PubMed ID: 36772108

  • 20. Using Explainable Machine Learning to Improve Intensive Care Unit Alarm Systems.
    González-Nóvoa JA; Busto L; Rodríguez-Andina JJ; Fariña J; Segura M; Gómez V; Vila D; Veiga C
    Sensors (Basel); 2021 Oct; 21(21):. PubMed ID: 34770432
