PubMed for Handhelds
Title: LapMentor metrics possess limited construct validity.
Author: Andreatta PB, Woodrum DT, Gauger PG, Minter RM.
Journal: Simul Healthc; 2008; 3(1):16-25. PubMed ID: 19088638.

Abstract:
BACKGROUND: Many surgical training programs are introducing virtual-reality laparoscopic simulators into their curricula. If a surgical simulator will be used to determine when a trainee has reached an "expert" level of performance, its evaluation metrics must accurately reflect varying levels of skill. The ability of a metric to differentiate novice from expert performance is referred to as construct validity. The present study was undertaken to determine whether the LapMentor's metrics demonstrate construct validity.
METHODS: Medical students, residents, and faculty laparoscopic surgeons (n = 5-14 per group) performed 5 consecutive repetitions of 6 laparoscopic skills tasks: 30 degrees Camera Manipulation, Eye-Hand Coordination, Clipping/Grasping, Cutting, Electrocautery, and Translocation of Objects. The LapMentor measured performance in 4 to 12 parameters per task. Mean performance for each parameter was compared between subject groups for the first and fifth repetitions. Pairwise comparisons among the 3 groups were made by post hoc t-tests with the Bonferroni technique. Significance was set at P < 0.05.
RESULTS: Of the 6 tasks evaluated, only the Eye-Hand Coordination task (3/12 parameters) and the Clipping/Grasping task (1/7 parameters) had expert-level discrimination when performance was compared after completion of 1 repetition. Comparison of the fifth repetition performance (representing the plateau of the learning curves) demonstrated that the parameters Time and Score had expert-level discrimination on the Eye-Hand Coordination task, and Time on the Cutting task. The remaining LapMentor tasks did not differentiate level of expertise based on the built-in metrics on either repetition 1 or 5.
CONCLUSIONS: The majority of the LapMentor tasks' metrics were unable to differentiate between laparoscopic experts and less skilled subjects. Therefore, performance on those tasks may not accurately reflect a subject's true level of ability. Feedback to the manufacturer about these findings may encourage the development of evaluation parameters with greater sensitivity.
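For readers unfamiliar with the comparison scheme named in METHODS, below is a minimal sketch of pairwise independent-samples t-tests among three groups with a Bonferroni correction for the three comparisons. The group values are fabricated placeholders for illustration, not the study's measurements, and the script is an assumed workflow rather than the authors' actual analysis code.

```python
# Sketch: pairwise t-tests among 3 groups with Bonferroni correction.
# Data are hypothetical task-completion times (seconds) for one parameter.
from itertools import combinations

from scipy.stats import ttest_ind

groups = {
    "students": [92.1, 88.4, 101.3, 95.0, 90.7],   # placeholder values
    "residents": [80.5, 77.2, 84.9, 79.1, 82.3],   # placeholder values
    "faculty": [61.0, 58.4, 66.2, 63.5, 60.1],     # placeholder values
}

alpha = 0.05
pairs = list(combinations(groups, 2))
n_comparisons = len(pairs)  # 3 pairwise comparisons among 3 groups

for name_a, name_b in pairs:
    t_stat, p_raw = ttest_ind(groups[name_a], groups[name_b])
    # Bonferroni: multiply the raw p-value by the number of comparisons,
    # capped at 1.0, and test against the overall alpha of 0.05.
    p_adj = min(p_raw * n_comparisons, 1.0)
    verdict = "significant" if p_adj < alpha else "not significant"
    print(f"{name_a} vs {name_b}: t = {t_stat:.2f}, "
          f"Bonferroni-adjusted p = {p_adj:.4f} ({verdict})")
```

A metric would show construct validity in this framework when the expert group differs significantly from both less experienced groups after correction.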