PubMed for Handhelds
Title: A Question of Scale? Generalizability of the Ottawa and Chen Scales to Render Entrustment Decisions for the Core EPAs in the Workplace.
Authors: Ryan MS, Khamishon R, Richards A, Perera R, Garber A, Santen SA.
Journal: Acad Med. 2022 Apr 01;97(4):552-561.
PubMed ID: 34074896.

Abstract:

PURPOSE: Assessments of the Core Entrustable Professional Activities (Core EPAs) are based on observations of supervisors throughout a medical student's progression toward entrustment. The purpose of this study was to compare the generalizability of scores from two entrustment scales: the Ottawa Surgical Competency Operating Room Evaluation (Ottawa) scale and an undergraduate medical education supervisory scale proposed by Chen and colleagues (Chen). A secondary aim was to determine the impact of frequent assessors on the generalizability of the data.

METHOD: For academic year 2019-2020, the Virginia Commonwealth University School of Medicine modified a previously described workplace-based assessment (WBA) system developed to provide feedback for the Core EPAs across clerkships. The WBA scored students' performance on both the Ottawa and Chen scales. Generalizability (G) and decision (D) studies were performed using an unbalanced random-effects model to determine the reliability of each scale. Secondary G- and D-studies explored whether faculty who rated more than 5 students demonstrated better reliability. The Phi-coefficient was used to estimate reliability, with a cutoff of at least 0.70 used to conduct the D-studies.

RESULTS: Using the Ottawa scale, variability attributable to the student ranged from 0.8% to 6.5%; for the Chen scale, student variability ranged from 1.8% to 7.1%. The majority of variation was therefore due to the rater (42.8%-61.3%) and other unexplained factors. Between 28 and 127 assessments were required to obtain a Phi-coefficient of 0.70. For 2 EPAs, restricting to faculty who frequently assessed the EPA improved generalizability, requiring only 5 and 13 assessments on the Chen scale.

CONCLUSIONS: Both scales performed poorly in terms of learner-attributed variance, with some improvement for 2 EPAs when considering only frequent assessors using the Chen scale. Based on these findings in conjunction with prior evidence, the authors provide a root cause analysis highlighting challenges with WBAs for the Core EPAs.
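To make the reliability arithmetic in the abstract concrete, the following is a minimal Python sketch of a D-study under generalizability theory, assuming a single pooled error term and purely illustrative variance components (the function names and numbers are hypothetical, not taken from the paper). The Phi (dependability) coefficient for absolute decisions is Phi = var_student / (var_student + var_error / n), where n is the number of assessments averaged per student.

```python
def phi_coefficient(var_student: float, var_error: float, n_obs: int) -> float:
    """Phi (dependability) coefficient for absolute decisions:
    var_student / (var_student + var_error / n_obs),
    where var_error pools rater and residual variance."""
    return var_student / (var_student + var_error / n_obs)


def min_assessments(var_student: float, var_error: float,
                    target: float = 0.70, max_n: int = 500) -> int | None:
    """D-study: the smallest number of assessments per student whose
    averaged score reaches the target Phi, or None if never reached."""
    for n in range(1, max_n + 1):
        if phi_coefficient(var_student, var_error, n) >= target:
            return n
    return None


if __name__ == "__main__":
    # Illustrative split only: student variance ~5% of the total, in the
    # spirit of the 0.8%-7.1% student-attributable range reported above.
    print(min_assessments(var_student=0.05, var_error=0.95))  # -> 45
```

Under these assumed components, with student-attributable variance near 5% of the total, roughly 45 averaged assessments are needed to reach Phi = 0.70, consistent in magnitude with the 28-127 assessments reported in the results.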