

PUBMED FOR HANDHELDS



  • Title: Reproducibility of the Banff classification in subclinical kidney transplant rejection.
    Authors: Veronese FV, Manfro RC, Roman FR, Edelweiss MI, Rush DN, Dancea S, Goldberg J, Gonçalves LF.
    Journal: Clin Transplant; 2005 Aug; 19(4):518-21. PubMed ID: 16008598.
    Abstract:
    The Banff classification for kidney allograft pathology has proved to be reproducible, but its inter- and intraobserver agreement can vary substantially among centres. The aim of this study was to evaluate the reproducibility of the Banff classification in surveillance renal allograft biopsies among renal pathologists from different transplant centres. This study included 32 renal transplant patients with stable graft function. Biopsies were performed 2 and 12 months post-transplant. Histology was interpreted according to the Banff schema by three renal pathologists, and inter- and intraobserver agreement were measured. The best reproducibility was obtained for the presence or absence of acute rejection (AR), with kappa values ranging from moderate (kappa = 0.47; p = 0.006) to good (kappa = 0.72; p = 0.0001). However, agreement for the 'suspicious for AR' category was poor among all observers. For scoring and grading of interstitial inflammation and intimal arteritis, agreement was poor and moderate, respectively. Reproducibility for the presence or absence of chronic allograft nephropathy (CAN) was heterogeneous, ranging from poor (kappa = 0.13; p = NS) to moderate (kappa = 0.56; p = 0.007). Scoring of chronic changes such as fibrous intimal thickening gave reasonable interobserver agreement. Intraobserver reproducibility was good for the presence or absence of AR but poor for the diagnosis of CAN. In conclusion, histologic analysis of stable renal allografts based on Banff criteria showed good agreement for the diagnosis of AR and a reasonable kappa for CAN, but scoring and grading showed substantial interobserver variation.
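    The agreement figures above are Cohen's kappa values, which correct raw percent agreement for the agreement expected by chance. As a minimal illustration only (the ratings below are hypothetical and not taken from the study), a "moderate" kappa near 0.5 can arise as follows:

        # Illustrative sketch: Cohen's kappa for two raters making present/absent
        # calls on the same biopsies. Data are hypothetical, not from the paper.
        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters over the same items."""
            n = len(rater_a)
            # Observed agreement: fraction of items with identical calls.
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            # Expected agreement: chance overlap from each rater's marginal frequencies.
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            categories = set(rater_a) | set(rater_b)
            p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
            return (p_o - p_e) / (1 - p_e)

        # Hypothetical "acute rejection present?" calls on 10 biopsies.
        pathologist_1 = ["yes", "no", "no", "yes", "no", "no", "yes", "no", "no", "no"]
        pathologist_2 = ["yes", "no", "yes", "yes", "no", "no", "no", "no", "no", "no"]
        print(round(cohens_kappa(pathologist_1, pathologist_2), 2))  # ~0.52, i.e. moderate

    Here the two raters agree on 8 of 10 biopsies (80%), but because both call most biopsies negative, chance alone would produce 58% agreement, so the chance-corrected kappa falls to about 0.52.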