

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

90 related articles for article (PubMed ID: 17019509)

  • 21. Kappa statistic to measure agreement beyond chance in free-response assessments.
    Carpentier M; Combescure C; Merlini L; Perneger TV
    BMC Med Res Methodol; 2017 Apr; 17(1):62. PubMed ID: 28420347

  • 22. Behavior and interpretation of the kappa statistic: resolution of the two paradoxes.
    Lantz CA; Nebenzahl E
    J Clin Epidemiol; 1996 Apr; 49(4):431-4. PubMed ID: 8621993

  • 23. Multiple-rater kappas for binary data: Models and interpretation.
    Stoyan D; Pommerening A; Hummel M; Kopp-Schneider A
    Biom J; 2018 Mar; 60(2):381-394. PubMed ID: 29280179

  • 24. The analysis of 2 x 1 and 2 x 2 contingency tables: an historical review.
    Richardson JT
    Stat Methods Med Res; 1994; 3(2):107-33. PubMed ID: 7952428

  • 25. Using prevalence indices to aid interpretation and comparison of agreement ratings between two or more observers.
    Burn CC; Weir AA
    Vet J; 2011 May; 188(2):166-70. PubMed ID: 20570535

  • 26. Kappa statistic for clustered matched-pair data.
    Yang Z; Zhou M
    Stat Med; 2014 Jul; 33(15):2612-33. PubMed ID: 24532251

  • 27. Weighted kappa for multiple raters.
    Berry KJ; Johnston JE; Mielke PW
    Percept Mot Skills; 2008 Dec; 107(3):837-48. PubMed ID: 19235413

  • 28. Reliability of the modified Rankin Scale across multiple raters: benefits of a structured interview.
    Wilson JT; Hareendran A; Hendry A; Potter J; Bone I; Muir KW
    Stroke; 2005 Apr; 36(4):777-81. PubMed ID: 15718510

  • 29. High agreement but low kappa: I. The problems of two paradoxes.
    Feinstein AR; Cicchetti DV
    J Clin Epidemiol; 1990; 43(6):543-9. PubMed ID: 2348207

  • 30. A note on the kappa statistic for clustered dichotomous data.
    Zhou M; Yang Z
    Stat Med; 2014 Jun; 33(14):2425-48. PubMed ID: 24488927

  • 31. Statistical Characteristics of the Weighted Inter-Rater Reliability Index for Clinically Validating Nursing Diagnoses.
    de Oliveira Lopes MV; da Silva VM; de Araujo TL; da Silva Filho JV
    Int J Nurs Knowl; 2015 Oct; 26(4):150-5. PubMed ID: 25098745

  • 32. Evidence-based medicine (EBM) in practice: agreement between observers rating esophageal varices: how to cope with chance?
    Sierra F; Cárdenas A
    Am J Gastroenterol; 2007 Nov; 102(11):2363-6. PubMed ID: 17958753

  • 33. Sequential rank agreement methods for comparison of ranked lists.
    Ekstrøm CT; Gerds TA; Jensen AK
    Biostatistics; 2019 Oct; 20(4):582-598. PubMed ID: 29868883

  • 34. Measures of agreement between many raters for ordinal classifications.
    Nelson KP; Edwards D
    Stat Med; 2015 Oct; 34(23):3116-32. PubMed ID: 26095449

  • 35. The exact variance of weighted kappa with multiple raters.
    Mielke PW; Berry KJ; Johnston JE
    Psychol Rep; 2007 Oct; 101(2):655-60. PubMed ID: 18175509

  • 36. A Ratio Test of Interrater Agreement With High Specificity.
    Cousineau D; Laurencelle L
    Educ Psychol Meas; 2015 Dec; 75(6):979-1001. PubMed ID: 29795849

  • 37. Random marginal agreement coefficients: rethinking the adjustment for chance when measuring agreement.
    Fay MP
    Biostatistics; 2005 Jan; 6(1):171-80. PubMed ID: 15618535

  • 38. Inter-rater agreement on assessment of outcome within a trauma registry.
    Ekegren CL; Hart MJ; Brown A; Gabbe BJ
    Injury; 2016 Jan; 47(1):130-4. PubMed ID: 26304002

  • 39. The kappa statistic was representative of empirically observed inter-rater agreement for physical findings.
    Gorelick MH; Yen K
    J Clin Epidemiol; 2006 Aug; 59(8):859-61. PubMed ID: 16828681

  • 40. Measuring interobserver variation in a pathology EQA scheme using weighted κ for multiple readers.
    Wright KC; Melia J; Moss S; Berney DM; Coleman D; Harnden P
    J Clin Pathol; 2011 Dec; 64(12):1128-31. PubMed ID: 21836039
