PUBMED FOR HANDHELDS

  • Title: Reliability of the classification of proximal femur fractures: Does clinical experience matter?
    Authors: Crijns TJ, Janssen SJ, Davis JT, Ring D, Sanchez HB; Science of Variation Group.
    Journal: Injury; 2018 Apr; 49(4):819-823. PubMed ID: 29549969.
    Abstract:
    BACKGROUND: Radiographic fracture classification helps with research on prognosis and treatment. Classification into fracture type according to the AO/OTA system has been shown to be reliable, but further classification of fractures into subgroups reduces interobserver agreement and requires considerable practice and experience to master.
    QUESTIONS/PURPOSES: We assessed: (1) differences between more and less experienced trauma surgeons, based on hip fractures treated per year, years of experience, and the percentage of time dedicated to trauma; (2) differences in interobserver agreement between classification into fracture type, group, and subgroup; and (3) differences in interobserver agreement when assessing fracture stability compared to classifying fractures into type, group, and subgroup.
    METHODS: This study used the Science of Variation Group to measure factors associated with variation in interobserver agreement on the classification of proximal femur fractures on radiographs according to the AO/OTA system. We selected 30 anteroposterior radiographs from 1061 patients aged 55 years or older with an isolated fracture of the proximal femur, with a spectrum of fracture types proportional to the full database. To measure interobserver agreement, Fleiss' kappa was determined, and bootstrapping (resamples = 1000) was used to calculate the standard error, z statistic, and 95% confidence intervals. We compared the kappa values of more experienced surgeons with those of less experienced surgeons.
    RESULTS: There were no statistically significant differences in kappa values at any classification level (type, group, subgroup) between more and less experienced surgeons. When all surgeons were combined into one group, interobserver reliability was greatest for classifying fractures into type (kappa, 0.90; 95% CI, 0.83 to 0.97; p < 0.001), reflecting almost perfect agreement. When comparing kappa values between classification levels (type, group, subgroup), we found statistically significant differences between each level. Substantial agreement was found for the clinically relevant groupings of stable/unstable trochanteric, displaced/non-displaced femoral neck, and femoral head fractures (kappa, 0.60; 95% CI, 0.53 to 0.67; p < 0.001).
    CONCLUSIONS: This study adds to a growing body of evidence that relatively simple distinctions are more reliable and that this reliability is independent of surgeon experience.
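    The agreement statistic described in METHODS can be reproduced in outline. The Python sketch below computes Fleiss' kappa with statsmodels and a bootstrap (resamples = 1000 over the 30 radiographs) for the standard error, z statistic, and 95% CI. The ratings are simulated, the rater count of 100 is invented for illustration, and resampling subjects with replacement is an assumption; the study's rating data are not public.

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    rng = np.random.default_rng(0)

    # Hypothetical ratings: 30 radiographs (rows) classified by 100 surgeons
    # (columns) into one of 3 AO/OTA fracture types, coded 0..2. Each rater
    # matches a simulated "true" type 90% of the time.
    n_subjects, n_raters, n_types = 30, 100, 3
    true_type = rng.integers(0, n_types, size=n_subjects)
    flip = rng.random((n_subjects, n_raters)) < 0.10
    ratings = np.where(flip,
                       rng.integers(0, n_types, size=(n_subjects, n_raters)),
                       true_type[:, None])

    # Point estimate: build the subjects-by-categories count table, then kappa.
    table, _ = aggregate_raters(ratings, n_cat=n_types)
    kappa = fleiss_kappa(table, method="fleiss")

    # Bootstrap over subjects (resamples = 1000): SE, z, and percentile 95% CI.
    boot = np.empty(1000)
    for b in range(1000):
        idx = rng.integers(0, n_subjects, size=n_subjects)
        t, _ = aggregate_raters(ratings[idx], n_cat=n_types)
        boot[b] = fleiss_kappa(t, method="fleiss")

    se = boot.std(ddof=1)
    z = kappa / se  # against the null hypothesis of kappa = 0
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"kappa={kappa:.2f}  SE={se:.3f}  z={z:.1f}  95% CI=({lo:.2f}, {hi:.2f})")

    A percentile interval is used here because the abstract does not specify how the 95% CI was constructed; a normal-approximation interval (kappa +/- 1.96 x SE) from the same bootstrap SE would be an equally plausible reading.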