

PUBMED FOR HANDHELDS

  • Title: A comparison of the American Board of Anesthesiology's in-person and virtual objective structured clinical examinations.
    Author: Sun H, Deiner SG, Harman AE, Isaak RS, Keegan MT.
    Journal: J Clin Anesth; 2023 Dec; 91:111258. PubMed ID: 37734196.
    Abstract:
    BACKGROUND: The American Board of Anesthesiology's Objective Structured Clinical Examination (OSCE), a component of its initial certification process, was administered in person at a dedicated assessment center from its launch in 2018 until March 2020. Because of the COVID-19 pandemic, a virtual format of the exam was piloted in December 2020 and administered in 2021. This study compared candidate performance, examiner grading severity, and scenario difficulty between the two exam formats.
    METHODS: The Many-Facet Rasch Model was used to estimate candidate performance, examiner grading severity, and scenario difficulty separately for the in-person and virtual OSCEs. The virtual OSCE was equated to the in-person OSCE through common examiners and common scenarios. An independent-samples t-test was used to compare candidate performance, and partially overlapping samples t-tests were applied to compare examiner grading severity and scenario difficulty between the two formats.
    RESULTS: The in-person (n = 3235) and virtual (n = 2934) first-time candidates were comparable in age, sex, race/ethnicity, and U.S. vs. international medical school graduation. The virtual scenarios (n = 35; mean ± SD: 0.21 ± 0.38 logits) were more difficult than the in-person scenarios (n = 93; 0.00 ± 0.69 logits; Welch's partially overlapping samples t-test, p = 0.01). There were no statistically significant differences in examiner severity (n = 390, -0.01 ± 0.82 vs. n = 304, -0.02 ± 0.93; Welch's partially overlapping samples t-test, p = 0.81) or candidate performance (2.19 ± 0.93 vs. 2.18 ± 0.92; Welch's independent-samples t-test, p = 0.83) between the in-person and virtual OSCEs.
    CONCLUSIONS: Our retrospective analyses of first-time OSCEs found comparable candidate performance and examiner grading severity between the in-person and virtual formats, despite the virtual scenarios being more difficult. These results provided assurance that the virtual OSCE functioned reasonably well in a high-stakes setting.
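    Note on the statistics: in the Many-Facet Rasch Model, all facets are estimated on a common logit scale; the standard formulation models the log-odds of candidate n receiving rating category k rather than k-1 from examiner j on scenario i as log(P_nijk / P_nij(k-1)) = B_n - D_i - C_j - F_k, where B_n is candidate ability, D_i scenario difficulty, C_j examiner severity, and F_k the category threshold. The scenario-difficulty comparison can be roughly checked from the summary statistics above. The Python sketch below is illustrative only, not the authors' analysis code: it applies a plain Welch t-test to the reported means and SDs, whereas the authors used partially overlapping samples t-tests (which account for the scenarios and examiners shared between formats), so it will not reproduce the published p-values exactly.

        # Illustrative approximation, not the authors' analysis.
        # Plain Welch t-test from the summary statistics reported above;
        # the paper's partially overlapping samples t-test additionally
        # models the overlap between samples, so its p-value differs.
        from scipy import stats

        # Scenario difficulty in logits: virtual vs. in-person.
        t, p = stats.ttest_ind_from_stats(
            mean1=0.21, std1=0.38, nobs1=35,  # virtual scenarios
            mean2=0.00, std2=0.69, nobs2=93,  # in-person scenarios
            equal_var=False,                  # Welch: unequal variances
        )
        print(f"Welch approximation: t = {t:.2f}, p = {p:.3f}")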