PubMed for Handhelds
Title: Reliability and reproducibility of the new AO/OTA 2018 classification system for proximal humeral fractures: a comparison of three different classification systems.
Author: Marongiu G, Leinardi L, Congia S, Frigau L, Mola F, Capone A.
Journal: J Orthop Traumatol; 2020 Mar 12; 21(1):4.
PubMed ID: 32166457.

Abstract:
BACKGROUND: The classification systems for proximal humeral fractures routinely used in clinical practice include the Neer and Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 2007 systems. Currently used systems have low inter- and intraobserver reliability. In 2018, AO/OTA introduced a new classification system with the aim of simplifying the coding process, in which the Neer four-part classification was integrated into the fracture description. The aim of the present work is to assess the inter- and intraobserver agreement of the new AO/OTA 2018 classification compared with the Neer and AO/OTA 2007 classifications.
MATERIALS AND METHODS: A total of 116 radiographs of consecutive patients with proximal humeral fracture were selected and classified by three observers with different levels of experience. All three observers independently reviewed and classified the images according to the Neer, AO/OTA 2007, and new AO/OTA 2018 systems. To determine the intraobserver agreement, the observers reviewed the same set of radiographs after an interval of 8 weeks. Inter- and intraobserver agreement were determined through Cohen's kappa coefficient analysis.
RESULTS: The new AO/OTA 2018 classification showed substantial mean interobserver (κ = 0.67) and intraobserver (κ = 0.75) agreement. These results are similar to the reliability observed for the Neer classification (interobserver, κ = 0.67; intraobserver, κ = 0.85) but better than those found for the AO/OTA 2007 system, which showed only moderate interobserver (κ = 0.57) and intraobserver (κ = 0.58) agreement. The two more experienced observers showed better overall agreement, but no statistically significant difference was found. No differences were found between surgical experience and agreement regarding specific fracture types or groups.
CONCLUSIONS: The results showed that the Neer system still represents the most reliable and reproducible classification. However, the new AO/OTA 2018 classification improved the agreement among observers compared with the AO/OTA 2007 system, while still maintaining substantial descriptive power and simplifying the coding process. The universal modifiers and qualifications, despite their possible complexity, allowed a more comprehensive fracture definition without negatively affecting the reliability or reproducibility of the classification system.
LEVEL OF EVIDENCE: Level III, diagnostic studies.
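
The agreement analysis reported above relies on Cohen's kappa, which corrects raw percentage agreement for the agreement expected by chance. The sketch below is purely illustrative and is not the authors' analysis or data: the observer ratings are made-up placeholder labels, and the computation uses scikit-learn's cohen_kappa_score as one common way to obtain the statistic.

```python
# Minimal sketch (assumed setup, not study data): Cohen's kappa between
# two observers classifying the same set of radiographs.
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of eight radiographs by two observers;
# the codes are only illustrative labels in the AO/OTA style.
observer_1 = ["11A1", "11B1", "11C1", "11A1", "11B1", "11C3", "11A2", "11B1"]
observer_2 = ["11A1", "11B1", "11C3", "11A1", "11C1", "11C3", "11A2", "11B1"]

# Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
kappa = cohen_kappa_score(observer_1, observer_2)
print(f"Interobserver Cohen's kappa: {kappa:.2f}")
```

Intraobserver agreement would be computed the same way, comparing one observer's first and second readings of the identical radiograph set; values around 0.61-0.80 are conventionally read as "substantial" agreement, matching the wording used in the abstract.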