These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

117 related articles for article (PubMed ID: 38437598)

  • 1. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.
    Ling S; Zhang Y; Du N
    Hum Factors; 2024 Dec; 66(12):2606-2620. PubMed ID: 38437598

  • 2. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming.
    Du N; Huang KY; Yang XJ
    Hum Factors; 2020 Sep; 62(6):987-1001. PubMed ID: 31348863

  • 3. Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.
    Mercado JE; Rupp MA; Chen JY; Barnes MJ; Barber D; Procci K
    Hum Factors; 2016 May; 58(3):401-15. PubMed ID: 26867556

  • 4. Automation trust and attention allocation in multitasking workspace.
    Karpinsky ND; Chancey ET; Palmer DB; Yamani Y
    Appl Ergon; 2018 Jul; 70():194-201. PubMed ID: 29866311

  • 5. Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions.
    Karran AJ; Demazure T; Hudon A; Senecal S; Léger PM
    Front Neurosci; 2022; 16():883385. PubMed ID: 35812230

  • 6. Human Performance Benefits of the Automation Transparency Design Principle: Validation and Variation.
    Skraaning G; Jamieson GA
    Hum Factors; 2021 May; 63(3):379-401. PubMed ID: 31834815

  • 7. Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation.
    Foroughi CK; Devlin S; Pak R; Brown NL; Sibley C; Coyne JT
    Hum Factors; 2023 Jun; 65(4):546-561. PubMed ID: 34348511

  • 8. Transparency improves the accuracy of automation use, but automation confidence information does not.
    Tatasciore M; Strickland L; Loft S
    Cogn Res Princ Implic; 2024 Oct; 9(1):67. PubMed ID: 39379606

  • 9. Enhancing safety in conditionally automated driving: Can more takeover request visual information make a difference in hazard scenarios with varied hazard visibility?
    Huang WC; Fan LH; Han ZJ; Niu YF
    Accid Anal Prev; 2024 Sep; 205():107687. PubMed ID: 38943983

  • 10. Trust and Distrust of Automated Parking in a Tesla Model X.
    Tenhundfeld NL; de Visser EJ; Ries AJ; Finomore VS; Tossell CC
    Hum Factors; 2020 Mar; 62(2):194-210. PubMed ID: 31419163

  • 11. Trust with Increasing and Decreasing Reliability.
    Rittenberg BSP; Holland CW; Barnhart GE; Gaudreau SM; Neyedli HF
    Hum Factors; 2024 Dec; 66(12):2569-2589. PubMed ID: 38445652

  • 12. How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
    Rosenbacke R; Melhus Å; McKee M; Stuckler D
    JMIR AI; 2024 Oct; 3():e53207. PubMed ID: 39476365

  • 13. The reliability and transparency bases of trust in human-swarm interaction: principles and implications.
    Hussein A; Elsawah S; Abbass HA
    Ergonomics; 2020 Sep; 63(9):1116-1132. PubMed ID: 32370651

  • 14. How transparency modulates trust in artificial intelligence.
    Zerilli J; Bhatt U; Weller A
    Patterns (N Y); 2022 Apr; 3(4):100455. PubMed ID: 35465233

  • 15. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study.
    Fernandes GJ; Choi A; Schauer JM; Pfammatter AF; Spring BJ; Darwiche A; Alshurafa NI
    J Med Internet Res; 2023 Sep; 25():e42047. PubMed ID: 37672333

  • 16. Providing different levels of accuracy about the reliability of automation to a human operator: impact on human performance.
    Avril E
    Ergonomics; 2023 Feb; 66(2):217-226. PubMed ID: 35451925

  • 17. Dancing With Algorithms: Interaction Creates Greater Preference and Trust in Machine-Learned Behavior.
    Gutzwiller RS; Reeder J
    Hum Factors; 2021 Aug; 63(5):854-867. PubMed ID: 32048883

  • 18. Not all trust is created equal: dispositional and history-based trust in human-automation interactions.
    Merritt SM; Ilgen DR
    Hum Factors; 2008 Apr; 50(2):194-210. PubMed ID: 18516832

  • 19. Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).
    Carter OBJ; Loft S; Visser TAW
    Hum Factors; 2024 Nov; 66(11):2485-2502. PubMed ID: 38041565

  • 20. Automation transparency: implications of uncertainty communication for human-automation interaction and interfaces.
    Kunze A; Summerskill SJ; Marshall R; Filtness AJ
    Ergonomics; 2019 Mar; 62(3):345-360. PubMed ID: 30501566
