  • Title: Intersoftware variability impacts classification of cardiac PET exams.
    Author: Oliveira JB, Sen YM, Wechalekar K.
    Journal: J Nucl Cardiol; 2019 Dec; 26(6):2007-2012. PubMed ID: 30238299.
    Abstract:
    BACKGROUND: Myocardial perfusion imaging (MPI) with 82Rb PET/CT is increasingly used to evaluate coronary artery disease with high diagnostic accuracy. Various software packages for data processing have been developed over the years, with conflicting data regarding their reproducibility. In this study, we compared quantitative myocardial perfusion results and exam classification across three software packages.
    METHODS: Data from consecutive patients who had undergone rest/stress 82Rb PET/CT MPI at the Royal Brompton & Harefield Trust, London, were analyzed. All data were processed using Corridor4DM (Invia, Ann Arbor, Michigan, USA), QPET (Cedars-Sinai, Los Angeles, California, USA), and SyngoMBF (Siemens Healthineers, Erlangen, Germany). All three packages applied the Lortie tracer kinetic model with the region-of-interest (ROI) extraction correction option.
    STATISTICS: A repeated-measures ANOVA with a Greenhouse-Geisser correction was performed, with post hoc tests using Bonferroni correction. For intersoftware variability, Pearson correlation and intraclass correlation coefficients (ICC) were calculated. Bland-Altman analysis assessed limits of agreement. Cohen's kappa assessed agreement in the classification of exams as normal or abnormal using a myocardial flow reserve (MFR) cut-off value of 2.0. A P value of less than 0.05 was considered statistically significant.
    RESULTS: Data from 55 patients were analyzed. Mean values of myocardial blood flow (MBF) and MFR differed significantly among the software packages (P < 0.05). Corridor4DM yielded considerably lower MFR values and classified substantially more exams as abnormal (MFR: 2.21 ± 0.7, 2.4 ± 0.8, and 1.98 ± 0.8, with 18, 15, and 31 abnormal exams for Syngo, QPET, and Corridor4DM, respectively). Accordingly, kappa agreement was moderate for Syngo vs QPET (k > 0.5) but minimal for Corridor4DM against either of the other packages (k < 0.4).
    CONCLUSION: Users should be cautious when using different software packages interchangeably, as systematic differences among them may introduce quantitative variation that could be clinically significant.
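The agreement statistics named in the STATISTICS paragraph are standard and straightforward to reproduce. As a minimal sketch, the Python code below shows how pairwise Pearson correlation, Bland-Altman limits of agreement, and Cohen's kappa at the MFR < 2.0 cutoff might be computed; the synthetic data, variable names, and random seed are illustrative assumptions, not the study's actual data or analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 55  # number of patients analyzed in the study

# Synthetic MFR values per software package, drawn to roughly match the
# reported means and SDs (placeholders only, not patient data).
mfr = {
    "Syngo":       rng.normal(2.21, 0.7, n),
    "QPET":        rng.normal(2.40, 0.8, n),
    "Corridor4DM": rng.normal(1.98, 0.8, n),
}

CUTOFF = 2.0  # exams with MFR < 2.0 are classified as abnormal

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

def cohens_kappa(x, y):
    """Cohen's kappa for two binary ratings (True = abnormal)."""
    po = np.mean(x == y)  # observed agreement
    pe = x.mean() * y.mean() + (1 - x.mean()) * (1 - y.mean())  # chance agreement
    return (po - pe) / (1 - pe)

for a, b in [("Syngo", "QPET"), ("Syngo", "Corridor4DM"), ("QPET", "Corridor4DM")]:
    r, _ = stats.pearsonr(mfr[a], mfr[b])
    bias, lo, hi = bland_altman(mfr[a], mfr[b])
    kappa = cohens_kappa(mfr[a] < CUTOFF, mfr[b] < CUTOFF)
    print(f"{a} vs {b}: r = {r:.2f}, bias = {bias:.2f} "
          f"(LoA {lo:.2f} to {hi:.2f}), kappa = {kappa:.2f}")
```

With real paired measurements in place of the independent synthetic draws, the same loop reproduces the paper's pairwise comparisons. The repeated-measures ANOVA with Greenhouse-Geisser correction and the ICC could be added with a library such as pingouin (rm_anova with correction=True, and intraclass_corr); this is an assumption about convenient tooling, not the authors' actual workflow.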