129 related articles for article (PubMed ID: 34766799)
1. Weighting schemes and incomplete data: A generalized Bayesian framework for chance-corrected interrater agreement.
van Oest R; Girard JM
Psychol Methods; 2022 Dec; 27(6):1069-1088. PubMed ID: 34766799
2. A new coefficient of interrater agreement: The challenge of highly unequal category proportions.
van Oest R
Psychol Methods; 2019 Aug; 24(4):439-451. PubMed ID: 29723005
3. The Dependence of Chance-Corrected Weighted Agreement Coefficients on the Power Parameter of the Weighting Scheme: Analysis and Measurement.
van Oest R
Psychometrika; 2023 Jun; 88(2):554-579. PubMed ID: 36066789
4. The role of raters' threshold in estimating interrater agreement.
Nucci M; Spoto A; Altoè G; Pastore M
Psychol Methods; 2021 Oct; 26(5):622-634. PubMed ID: 34855432
5. Measuring Agreement Using Guessing Models and Knowledge Coefficients.
Moss J
Psychometrika; 2023 Sep; 88(3):1002-1025. PubMed ID: 37291419
6. Interrater reliability estimators tested against true interrater reliabilities.
Zhao X; Feng GC; Ao SH; Liu PL
BMC Med Res Methodol; 2022 Aug; 22(1):232. PubMed ID: 36038846
7. Measures of Agreement with Multiple Raters: Fréchet Variances and Inference.
Moss J
Psychometrika; 2024 Jun; 89(2):517-541. PubMed ID: 38190018
8. Robustness of
Vanacore A; Pellegrino MS
Stat Med; 2022 May; 41(11):1986-2004. PubMed ID: 35124830
9. Measuring inter-rater reliability for nominal data -- which coefficients and confidence intervals are appropriate?
Zapf A; Castell S; Morawietz L; Karch A
BMC Med Res Methodol; 2016 Aug; 16():93. PubMed ID: 27495131
10. A comparison of methods for calculating a stratified kappa.
Barlow W; Lai MY; Azen SP
Stat Med; 1991 Sep; 10(9):1465-72. PubMed ID: 1925174
11. Assessing agreement between multiple raters with missing rating information, applied to breast cancer tumour grading.
Fanshawe TR; Lynch AG; Ellis IO; Green AR; Hanka R
PLoS One; 2008 Aug; 3(8):e2925. PubMed ID: 18698346
12. Interrater agreement statistics with skewed data: evaluation of alternatives to Cohen's kappa.
Xu S; Lorber MF
J Consult Clin Psychol; 2014 Dec; 82(6):1219-27. PubMed ID: 25090041
13. Kappa statistics to measure interrater and intrarater agreement for 1790 cervical biopsy specimens among twelve pathologists: qualitative histopathologic analysis and methodologic issues.
Malpica A; Matisic JP; Niekirk DV; Crum CP; Staerkel GA; Yamal JM; Guillaud MH; Cox DD; Atkinson EN; Adler-Storthz K; Poulin NM; Macaulay CA; Follen M
Gynecol Oncol; 2005 Dec; 99(3 Suppl 1):S38-52. PubMed ID: 16183106
14. Assessing the inter-rater agreement for ordinal data through weighted indexes.
Marasini D; Quatto P; Ripamonti E
Stat Methods Med Res; 2016 Dec; 25(6):2611-2633. PubMed ID: 24740999
15. Interrater reliability: the kappa statistic.
McHugh ML
Biochem Med (Zagreb); 2012; 22(3):276-82. PubMed ID: 23092060
16. Kappa Coefficients for Missing Data.
De Raadt A; Warrens MJ; Bosker RJ; Kiers HAL
Educ Psychol Meas; 2019 Jun; 79(3):558-576. PubMed ID: 31105323
17. Dependence of weighted kappa coefficients on the number of categories.
Brenner H; Kliebsch U
Epidemiology; 1996 Mar; 7(2):199-202. PubMed ID: 8834562
18. Interobserver agreement: Cohen's kappa coefficient does not necessarily reflect the percentage of patients with congruent classifications.
Steinijans VW; Diletti E; Bömches B; Greis C; Solleder P
Int J Clin Pharmacol Ther; 1997 Mar; 35(3):93-5. PubMed ID: 9088995
19. Better to be in agreement than in bad company: A critical analysis of many kappa-like tests.
Silveira PSP; Siqueira JO
Behav Res Methods; 2023 Oct; 55(7):3326-3347. PubMed ID: 36114386
20. Bayesian approaches to the weighted kappa-like inter-rater agreement measures.
Tran QD; Demirhan H; Dolgun A
Stat Methods Med Res; 2021 Oct; 30(10):2329-2351. PubMed ID: 34448633