BIOMARKERS

Molecular Biopsy of Human Tumors — a resource for Precision Medicine

237 related articles for article (PubMed ID: 35523917)

  • 21. A critical moment in machine learning in medicine: on reproducible and interpretable learning.
    Ciobanu-Caraus O; Aicher A; Kernbach JM; Regli L; Serra C; Staartjes VE
    Acta Neurochir (Wien); 2024 Jan; 166(1):14. PubMed ID: 38227273

  • 22. Machine Learning and Artificial Intelligence in Neurocritical Care: a Specialty-Wide Disruptive Transformation or a Strategy for Success.
    Al-Mufti F; Kim M; Dodson V; Sursal T; Bowers C; Cole C; Scurlock C; Becker C; Gandhi C; Mayer SA
    Curr Neurol Neurosci Rep; 2019 Nov; 19(11):89. PubMed ID: 31720867

  • 23. Opening the black box: interpretability of machine learning algorithms in electrocardiography.
    Bodini M; Rivolta MW; Sassi R
    Philos Trans A Math Phys Eng Sci; 2021 Dec; 379(2212):20200253. PubMed ID: 34689625

  • 24. Interpretable Artificial Intelligence: Why and When.
    Ghosh A; Kandasamy D
    AJR Am J Roentgenol; 2020 May; 214(5):1137-1138. PubMed ID: 32130042

  • 25. On the importance of interpretable machine learning predictions to inform clinical decision making in oncology.
    Lu SC; Swisher CL; Chung C; Jaffray D; Sidey-Gibbons C
    Front Oncol; 2023; 13():1129380. PubMed ID: 36925929

  • 26. Building more accurate decision trees with the additive tree.
    Luna JM; Gennatas ED; Ungar LH; Eaton E; Diffenderfer ES; Jensen ST; Simone CB; Friedman JH; Solberg TD; Valdes G
    Proc Natl Acad Sci U S A; 2019 Oct; 116(40):19887-19893. PubMed ID: 31527280

  • 27. Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy.
    Upadhyaya DP; Prantzalos K; Thyagaraj S; Shafiabadi N; Fernandez-Baca Vaca G; Sivagnanam S; Majumdar A; Sahoo SS
    medRxiv [Preprint]; 2023 Oct. PubMed ID: 37425941

  • 28. Black Box Prediction Methods in Sports Medicine Deserve a Red Card for Reckless Practice: A Change of Tactics is Needed to Advance Athlete Care.
    Bullock GS; Hughes T; Arundale AH; Ward P; Collins GS; Kluzek S
    Sports Med; 2022 Aug; 52(8):1729-1735. PubMed ID: 35175575

  • 29. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?
    Yoon CH; Torrance R; Scheinerman N
    J Med Ethics; 2022 Sep; 48(9):581-585. PubMed ID: 34006600

  • 30. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.
    Pereira S; Meier R; McKinley R; Wiest R; Alves V; Silva CA; Reyes M
    Med Image Anal; 2018 Feb; 44():228-244. PubMed ID: 29289703

  • 31. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
    Tjoa E; Guan C
    IEEE Trans Neural Netw Learn Syst; 2021 Nov; 32(11):4793-4813. PubMed ID: 33079674

  • 32. MediBoost: a Patient Stratification Tool for Interpretable Decision Making in the Era of Precision Medicine.
    Valdes G; Luna JM; Eaton E; Simone CB; Ungar LH; Solberg TD
    Sci Rep; 2016 Nov; 6():37854. PubMed ID: 27901055

  • 33. Medical Informatics in a Tension Between Black-Box AI and Trust.
    Sariyar M; Holm J
    Stud Health Technol Inform; 2022 Jan; 289():41-44. PubMed ID: 35062087

  • 34. Machine Learning in Aging: An Example of Developing Prediction Models for Serious Fall Injury in Older Adults.
    Speiser JL; Callahan KE; Houston DK; Fanning J; Gill TM; Guralnik JM; Newman AB; Pahor M; Rejeski WJ; Miller ME
    J Gerontol A Biol Sci Med Sci; 2021 Mar; 76(4):647-654. PubMed ID: 32498077

  • 35. Interpretable Machine Learning Techniques for Causal Inference Using Balancing Scores as Meta-features.
    Nohara Y; Iihara K; Nakashima N
    Annu Int Conf IEEE Eng Med Biol Soc; 2018 Jul; 2018():4042-4045. PubMed ID: 30441244

  • 36. Interpretable machine learning for dementia: A systematic review.
    Martin SA; Townend FJ; Barkhof F; Cole JH
    Alzheimers Dement; 2023 May; 19(5):2135-2149. PubMed ID: 36735865

  • 37. Cartesian genetic programming for diagnosis of Parkinson disease through handwriting analysis: Performance vs. interpretability issues.
    Parziale A; Senatore R; Della Cioppa A; Marcelli A
    Artif Intell Med; 2021 Jan; 111():101984. PubMed ID: 33461684

  • 38. Leveraging interpretable machine learning algorithms to predict postoperative patient outcomes on mobile devices.
    El Hechi MW; Nour Eddine SA; Maurer LR; Kaafarani HMA
    Surgery; 2021 Apr; 169(4):750-754. PubMed ID: 32919784

  • 39. Interpretable machine learning methods for predictions in systems biology from omics data.
    Sidak D; Schwarzerová J; Weckwerth W; Waldherr S
    Front Mol Biosci; 2022; 9():926623. PubMed ID: 36387282

  • 40. Interpretability analysis for thermal sensation machine learning models: An exploration based on the SHAP approach.
    Yang Y; Yuan Y; Han Z; Liu G
    Indoor Air; 2022 Jan; 32(2):e12984. PubMed ID: 35048421
