PUBMED FOR HANDHELDS

  • Title: Inter-Rater Reliability and Agreement Among Mass-Casualty Incident Algorithms Using a Pediatric Trauma Dataset: A Pilot Study.
    Authors: Fisher EH, Claudius I, Kaji AH, Shaban A, McGlynn N, Cicero MX, Santillanes G, Gausche-Hill M, Chang TP, Donofrio-Odmann JJ.
    Journal: Prehosp Disaster Med; 2022 Jun; 37(3):306-313. PubMed ID: 35441588.
    Abstract:
    INTRODUCTION: Many triage algorithms exist for use in mass-casualty incidents (MCIs) involving pediatric patients. Most of these algorithms have not been validated for reliability across users.
    STUDY OBJECTIVE: Investigators sought to compare inter-rater reliability (IRR) and agreement among five MCI algorithms used in the pediatric population.
    METHODS: A dataset of 253 pediatric (<14 years of age) trauma activations from a Level I trauma center was used to obtain prehospital information and demographics. Three raters were trained on five MCI triage algorithms: Simple Triage and Rapid Treatment (START) and JumpSTART, as appropriate for age (combined as J-START); Sort Assess Life-Saving Intervention Treatment (SALT); Pediatric Triage Tape (PTT); CareFlight (CF); and Sacco Triage Method (STM). Patient outcomes were collected but not available to raters. Each rater triaged the full set of patients into Green, Yellow, Red, or Black categories with each of the five MCI algorithms. The IRR was reported as weighted kappa scores with 95% confidence intervals (CI). Descriptive statistics were used to describe inter-rater and inter-MCI algorithm agreement.
    RESULTS: Of the 253 patients, 247 had complete triage assignments among the five algorithms and were included in the study. The IRR was excellent for a majority of the algorithms; however, J-START and CF had the highest reliability, with a kappa of 0.94 or higher (0.9-1.0, 95% CI for overall weighted kappa). The greatest variability was in SALT among Green and Yellow patients. Overall, J-START and CF had the highest inter-rater and inter-MCI algorithm agreements.
    CONCLUSION: The IRR was excellent for a majority of the algorithms. The SALT algorithm, which contains subjective components, had the lowest IRR when applied to this dataset of pediatric trauma patients. Both J-START and CF demonstrated the best overall reliability and agreement.
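The abstract reports IRR as weighted kappa scores over the ordered triage categories (Green < Yellow < Red < Black). As a minimal illustration of that statistic (this is not the study's code, and the rating lists below are invented, not the study's data), a linearly weighted Cohen's kappa for two raters can be sketched as:

```python
def weighted_kappa(rater1, rater2, categories):
    """Linearly weighted Cohen's kappa for two raters over ordered categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    # Observed joint distribution of the two raters' assignments.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal distributions for each rater.
    m1 = [sum(row) for row in obs]
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear agreement weights: full credit on the diagonal,
    # partial credit for near-miss categories.
    w = [[1.0 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    p_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    p_exp = sum(w[i][j] * m1[i] * m2[j] for i in range(k) for j in range(k))
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical triage assignments by two raters (illustrative only).
cats = ["Green", "Yellow", "Red", "Black"]
r1 = ["Green", "Yellow", "Red", "Black", "Green", "Yellow"]
r2 = ["Green", "Yellow", "Red", "Black", "Yellow", "Yellow"]
print(round(weighted_kappa(r1, r2, cats), 3))  # → 0.85
```

With linear weights, a one-step disagreement (Green vs. Yellow) still earns partial credit, which is why ordered-category triage studies typically report weighted rather than unweighted kappa.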