BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

140 related articles for article (PubMed ID: 8310189)

  • 1. How reliable are chance-corrected measures of agreement?
    Guggenmoos-Holzmann I
    Stat Med; 1993 Dec; 12(23):2191-205. PubMed ID: 8310189

  • 2. Kappa-like indices of observer agreement viewed from a latent class perspective.
    Guggenmoos-Holzmann I; Vonk R
    Stat Med; 1998 Apr; 17(8):797-812. PubMed ID: 9595612

  • 3. [Quality criteria of assessment scales--Cohen's kappa as a measure of interrater reliability (1)].
    Mayer H; Nonn C; Osterbrink J; Evers GC
    Pflege; 2004 Feb; 17(1):36-46. PubMed ID: 15040245

  • 4. The meaning of kappa: probabilistic concepts of reliability and validity revisited.
    Guggenmoos-Holzmann I
    J Clin Epidemiol; 1996 Jul; 49(7):775-82. PubMed ID: 8691228

  • 5. The Kappa Paradox Explained.
    Derksen BM; Bruinsma W; Goslings JC; Schep NWL
    J Hand Surg Am; 2024 May; 49(5):482-485. PubMed ID: 38372689

  • 6. Interobserver agreement: Cohen's kappa coefficient does not necessarily reflect the percentage of patients with congruent classifications.
    Steinijans VW; Diletti E; Bömches B; Greis C; Solleder P
    Int J Clin Pharmacol Ther; 1997 Mar; 35(3):93-5. PubMed ID: 9088995

  • 7. Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa.
    Warrens MJ
    Br J Math Stat Psychol; 2011 May; 64(Pt 2):355-65. PubMed ID: 21492138

  • 8. Interpreting kappa values for two-observer nursing diagnosis data.
    Banerjee M; Fielding J
    Res Nurs Health; 1997 Oct; 20(5):465-70. PubMed ID: 9334800

  • 9. [Analyzing interrater agreement for categorical data using Cohen's kappa and alternative coefficients].
    Wirtz M; Kutschmann M
    Rehabilitation (Stuttg); 2007 Dec; 46(6):370-7. PubMed ID: 18188809

  • 10. The prediction of pouch of Douglas obliteration using offline analysis of the transvaginal ultrasound 'sliding sign' technique: inter- and intra-observer reproducibility.
    Reid S; Lu C; Casikar I; Mein B; Magotti R; Ludlow J; Benzie R; Condous G
    Hum Reprod; 2013 May; 28(5):1237-46. PubMed ID: 23482338

  • 11. Measures of Agreement with Multiple Raters: Fréchet Variances and Inference.
    Moss J
    Psychometrika; 2024 Jun; 89(2):517-541. PubMed ID: 38190018

  • 12. Measurement of observer agreement.
    Kundel HL; Polansky M
    Radiology; 2003 Aug; 228(2):303-8. PubMed ID: 12819342

  • 13. The Simpson's paradox unraveled.
    Hernán MA; Clayton D; Keiding N
    Int J Epidemiol; 2011 Jun; 40(3):780-5. PubMed ID: 21454324

  • 14. Chance-corrected measures of reliability and validity in K x K tables.
    Andrés AM; Marzo PF
    Stat Methods Med Res; 2005 Oct; 14(5):473-92. PubMed ID: 16248349

  • 15. Homogeneity of kappa statistics in multiple samples.
    Reed JF
    Comput Methods Programs Biomed; 2000 Aug; 63(1):43-6. PubMed ID: 10927153

  • 16. Pitfalls in the use of kappa when interpreting agreement between multiple raters in reliability studies.
    O'Leary S; Lund M; Ytre-Hauge TJ; Holm SR; Naess K; Dalland LN; McPhail SM
    Physiotherapy; 2014 Mar; 100(1):27-35. PubMed ID: 24262334

  • 17. Clinicians are right not to like Cohen's κ.
    de Vet HC; Mokkink LB; Terwee CB; Hoekstra OS; Knol DL
    BMJ; 2013 Apr; 346():f2125. PubMed ID: 23585065

  • 18. The dependence of Cohen's kappa on the prevalence does not matter.
    Vach W
    J Clin Epidemiol; 2005 Jul; 58(7):655-61. PubMed ID: 15939215

  • 19. Relationships between statistical measures of agreement: sensitivity, specificity and kappa.
    Feuerman M; Miller AR
    J Eval Clin Pract; 2008 Oct; 14(5):930-3. PubMed ID: 19018927

  • 20. A paired kappa to compare binary ratings across two medical tests.
    Nelson KP; Edwards D
    Stat Med; 2019 Jul; 38(17):3272-3287. PubMed ID: 31099902
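The articles above share a common subject: Cohen's kappa as a chance-corrected measure of agreement, and the "kappa paradox" (entries 5, 6, 17, 18), in which two raters with high raw agreement can still obtain a low kappa when the marginal prevalence is skewed. A minimal sketch of the computation, not drawn from any of the cited papers, illustrates the effect on two hypothetical 2 × 2 tables:

```python
# Cohen's kappa for two raters on a square contingency table.
# table[i][j] = number of subjects rated category i by rater A
# and category j by rater B.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of subjects on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Expected chance agreement from the marginal totals.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Balanced prevalence: 90% raw agreement, kappa = 0.80.
balanced = [[45, 5], [5, 45]]
print(round(cohens_kappa(balanced), 3))  # 0.8

# Skewed prevalence (the "kappa paradox"): raw agreement rises
# to 92%, yet kappa drops to about 0.29, because near-unanimous
# marginals make chance agreement (p_e ~ 0.89) very high.
skewed = [[90, 4], [4, 2]]
print(round(cohens_kappa(skewed), 3))  # 0.291
```

The hypothetical tables are chosen only to make the contrast visible; the cited papers debate whether this prevalence dependence is a flaw of kappa or a correct reflection of the rating task.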
