BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

147 related articles for article (PubMed ID: 1341658)

  • 21. Log-linear non-uniform association models for agreement between two ratings on an ordinal scale.
    Valet F; Guinot C; Mary JY
    Stat Med; 2007 Feb; 26(3):647-62. PubMed ID: 16538701

  • 22. Log-Linear Modeling of Agreement among Expert Exposure Assessors.
    Hunt PR; Friesen MC; Sama S; Ryan L; Milton D
    Ann Occup Hyg; 2015 Jul; 59(6):764-74. PubMed ID: 25748517

  • 23. A new permutation-based method for assessing agreement between two observers making replicated binary readings.
    Pan Y; Haber M; Barnhart HX
    Stat Med; 2011 Apr; 30(8):839-53. PubMed ID: 21432878

  • 24. Interobserver agreement: Cohen's kappa coefficient does not necessarily reflect the percentage of patients with congruent classifications.
    Steinijans VW; Diletti E; Bömches B; Greis C; Solleder P
    Int J Clin Pharmacol Ther; 1997 Mar; 35(3):93-5. PubMed ID: 9088995

  • 25. Assessing observer agreement in studies involving replicated binary observations.
    Haber M; Gao J; Barnhart HX
    J Biopharm Stat; 2007; 17(4):757-66. PubMed ID: 17613652

  • 26. How reliable are chance-corrected measures of agreement?
    Guggenmoos-Holzmann I
    Stat Med; 1993 Dec; 12(23):2191-205. PubMed ID: 8310189

  • 27. Random marginal agreement coefficients: rethinking the adjustment for chance when measuring agreement.
    Fay MP
    Biostatistics; 2005 Jan; 6(1):171-80. PubMed ID: 15618535

  • 28. Beyond kappa: an informational index for diagnostic agreement in dichotomous and multivalue ordered-categorical ratings.
    Casagrande A; Fabris F; Girometti R
    Med Biol Eng Comput; 2020 Dec; 58(12):3089-3099. PubMed ID: 33145661

  • 29. [Evaluation of the structures of agreement and disagreement in reliability studies].
    da Silva EF; Pereira MG
    Rev Saude Publica; 1998 Aug; 32(4):383-93. PubMed ID: 9876431

  • 30. Reproducibility of the implant crown aesthetic index--rating aesthetics of single-implant crowns and adjacent soft tissues with regard to observer dental specialization.
    Gehrke P; Degidi M; Lulay-Saad Z; Dhom G
    Clin Implant Dent Relat Res; 2009 Sep; 11(3):201-13. PubMed ID: 18657148

  • 31. The meaning of kappa: probabilistic concepts of reliability and validity revisited.
    Guggenmoos-Holzmann I
    J Clin Epidemiol; 1996 Jul; 49(7):775-82. PubMed ID: 8691228

  • 32. Assessing observer agreement when describing and classifying functioning with the International Classification of Functioning, Disability and Health.
    Grill E; Mansmann U; Cieza A; Stucki G
    J Rehabil Med; 2007 Jan; 39(1):71-6. PubMed ID: 17225041

  • 33. Measurement of interrater agreement with adjustment for covariates.
    Barlow W
    Biometrics; 1996 Jun; 52(2):695-702. PubMed ID: 10766505

  • 34. Weighted specific-category kappa measure of interobserver agreement.
    Kvålseth TO
    Psychol Rep; 2003 Dec; 93(3 Pt 2):1283-90. PubMed ID: 14765602

  • 35. Chance-corrected measures of reliability and validity in K x K tables.
    Andrés AM; Marzo PF
    Stat Methods Med Res; 2005 Oct; 14(5):473-92. PubMed ID: 16248349

  • 36. The design of observer agreement studies with binary assessments.
    Freedman LS; Parmar MK; Baker SG
    Stat Med; 1993 Jan; 12(2):165-79. PubMed ID: 8446811

  • 37. Observer agreement paradoxes in 2x2 tables: comparison of agreement measures.
    Shankar V; Bangdiwala SI
    BMC Med Res Methodol; 2014 Aug; 14:100. PubMed ID: 25168681

  • 38. The Effect of the Raters' Marginal Distributions on Their Matched Agreement: A Rescaling Framework for Interpreting Kappa.
    Karelitz TM; Budescu DV
    Multivariate Behav Res; 2013 Nov; 48(6):923-52. PubMed ID: 26745599

  • 39. Pitfalls in the use of kappa when interpreting agreement between multiple raters in reliability studies.
    O'Leary S; Lund M; Ytre-Hauge TJ; Holm SR; Naess K; Dalland LN; McPhail SM
    Physiotherapy; 2014 Mar; 100(1):27-35. PubMed ID: 24262334

  • 40. The value of Bayes theorem in the interpretation of subjective diagnostic findings: what can we learn from agreement studies?
    Sadatsafavi M; Moayyeri A; Bahrami H; Soltani A
    Med Decis Making; 2007; 27(6):735-43. PubMed ID: 17873264

    Page 2 of 8.
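
Many of the entries on this page (for example 24, 27, 31, 37, and 39) concern Cohen's kappa and the behavior of chance-corrected agreement measures. As a minimal illustrative sketch, not drawn from any of the cited papers, the Python snippet below computes Cohen's kappa from a square agreement table and reproduces the paradox examined in entry 37: two raters who agree on 90% of cases can still obtain a near-zero kappa when the category marginals are heavily skewed.

    # Illustrative sketch (not from the cited papers): Cohen's kappa for a
    # k x k inter-rater agreement count table.

    def cohens_kappa(table):
        """Cohen's kappa = (p_o - p_e) / (1 - p_e) for a k x k count table."""
        n = sum(sum(row) for row in table)
        k = len(table)
        p_o = sum(table[i][i] for i in range(k)) / n          # observed agreement
        row_marg = [sum(row) / n for row in table]            # rater 1 marginals
        col_marg = [sum(table[i][j] for i in range(k)) / n
                    for j in range(k)]                        # rater 2 marginals
        p_e = sum(r * c for r, c in zip(row_marg, col_marg))  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    skewed   = [[90, 5], [5, 0]]    # 90% raw agreement, marginals 95/5
    balanced = [[45, 5], [5, 45]]   # 90% raw agreement, marginals 50/50

    print(cohens_kappa(skewed))     # approx -0.053: near zero despite 90% agreement
    print(cohens_kappa(balanced))   # 0.8: same raw agreement, balanced marginals

With identical 90% raw agreement, the balanced table yields kappa = 0.8 while the skewed table yields roughly -0.05; the difference comes entirely from the expected chance-agreement term p_e, which is the adjustment that entries 27 and 42 in this series of papers revisit.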