

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

82 related articles for article (PubMed ID: 26745599); entries 1-20 are shown below.

  • 1. The Effect of the Raters' Marginal Distributions on Their Matched Agreement: A Rescaling Framework for Interpreting Kappa.
    Karelitz TM; Budescu DV
    Multivariate Behav Res; 2013 Nov; 48(6):923-52. PubMed ID: 26745599

  • 2. Summary measures of agreement and association between many raters' ordinal classifications.
    Mitani AA; Freer PE; Nelson KP
    Ann Epidemiol; 2017 Oct; 27(10):677-685.e4. PubMed ID: 29029991

  • 3. Clinicians are right not to like Cohen's κ.
    de Vet HC; Mokkink LB; Terwee CB; Hoekstra OS; Knol DL
    BMJ; 2013 Apr; 346():f2125. PubMed ID: 23585065

  • 4. Measuring agreement for ordered ratings in 3 x 3 tables.
    Neveu D; Aubas P; Seguret F; Kramar A; Dujols P
    Methods Inf Med; 2006; 45(5):541-7. PubMed ID: 17019509

  • 5. [Quality criteria of assessment scales--Cohen's kappa as a measure of interrater reliability (1)].
    Mayer H; Nonn C; Osterbrink J; Evers GC
    Pflege; 2004 Feb; 17(1):36-46. PubMed ID: 15040245

  • 6. Measures of agreement between many raters for ordinal classifications.
    Nelson KP; Edwards D
    Stat Med; 2015 Oct; 34(23):3116-32. PubMed ID: 26095449

  • 7. Delta: a new measure of agreement between two raters.
    Andrés AM; Marzo PF
    Br J Math Stat Psychol; 2004 May; 57(Pt 1):1-19. PubMed ID: 15171798

  • 8. Pitfalls in the use of kappa when interpreting agreement between multiple raters in reliability studies.
    O'Leary S; Lund M; Ytre-Hauge TJ; Holm SR; Naess K; Dalland LN; McPhail SM
    Physiotherapy; 2014 Mar; 100(1):27-35. PubMed ID: 24262334

  • 9. Random marginal agreement coefficients: rethinking the adjustment for chance when measuring agreement.
    Fay MP
    Biostatistics; 2005 Jan; 6(1):171-80. PubMed ID: 15618535

  • 10. Evaluating the effects of rater and subject factors on measures of association.
    Nelson KP; Mitani AA; Edwards D
    Biom J; 2018 May; 60(3):639-656. PubMed ID: 29349801

  • 11. A sequential test for assessing observed agreement between raters.
    Bersimis S; Sachlas A; Chakraborti S
    Biom J; 2018 Jan; 60(1):128-145. PubMed ID: 28898444

  • 12. Weighted specific-category kappa measure of interobserver agreement.
    Kvålseth TO
    Psychol Rep; 2003 Dec; 93(3 Pt 2):1283-90. PubMed ID: 14765602

  • 13. Midwives' visual interpretation of intrapartum cardiotocographs: intra- and inter-observer agreement.
    Devane D; Lalor J
    J Adv Nurs; 2005 Oct; 52(2):133-41. PubMed ID: 16164474

  • 14. Interobserver agreement: Cohen's kappa coefficient does not necessarily reflect the percentage of patients with congruent classifications.
    Steinijans VW; Diletti E; Bömches B; Greis C; Solleder P
    Int J Clin Pharmacol Ther; 1997 Mar; 35(3):93-5. PubMed ID: 9088995

  • 15. The meaning of kappa: probabilistic concepts of reliability and validity revisited.
    Guggenmoos-Holzmann I
    J Clin Epidemiol; 1996 Jul; 49(7):775-82. PubMed ID: 8691228

  • 16. Observer agreement paradoxes in 2x2 tables: comparison of agreement measures.
    Shankar V; Bangdiwala SI
    BMC Med Res Methodol; 2014 Aug; 14():100. PubMed ID: 25168681

  • 17. [Analyzing interrater agreement for categorical data using Cohen's kappa and alternative coefficients].
    Wirtz M; Kutschmann M
    Rehabilitation (Stuttg); 2007 Dec; 46(6):370-7. PubMed ID: 18188809

  • 18. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
    Hong H; Choi Y; Hahn S; Park SK; Park BJ
    Ann Epidemiol; 2014 Sep; 24(9):673-80. PubMed ID: 25088752

  • 19. Modelling patterns of agreement and disagreement.
    Agresti A
    Stat Methods Med Res; 1992; 1(2):201-18. PubMed ID: 1341658

  • 20. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance.
    Gwet KL
    Educ Psychol Meas; 2016 Aug; 76(4):609-637. PubMed ID: 29795880
