4. Tips for learners of evidence-based medicine: 3. Measures of observer variability (kappa statistic). McGinn T; Wyer PC; Newman TB; Keitz S; Leipzig R; For GG. CMAJ; 2004 Nov; 171(11):1369-73. PubMed ID: 15557592
5. Issues in the use of kappa. Ker M. Invest Radiol; 1991 Jan; 26(1):78-83. PubMed ID: 2022457
6. Simultaneous estimation of intrarater and interrater agreement for multiple raters under order restrictions for a binary trait. Lester Kirchner H; Lemke JH. Stat Med; 2002 Jun; 21(12):1761-72. PubMed ID: 12111910
7. [The concordance test between two experts]. Dupuy A; Guillaume JC. Ann Dermatol Venereol; 2003 May; 130(5):570. PubMed ID: 12843842
11. Inter-rater evaluation of a clinical scoring system in children with asthma. Angelilli ML; Thomas R. Ann Allergy Asthma Immunol; 2002 Feb; 88(2):209-14. PubMed ID: 11868927
12. Inter-observer agreement in audit of quality of radiology requests and reports. Stavem K; Foss T; Botnmark O; Andersen OK; Erikssen J. Clin Radiol; 2004 Nov; 59(11):1018-24. PubMed ID: 15488851
13. Principles: the need for better experimental design. Festing MF. Trends Pharmacol Sci; 2003 Jul; 24(7):341-5. PubMed ID: 12871666
14. Statistics for diagnostic procedures. III. Philosophic and research design considerations. Scott JA; Phillips WC; Blasczcynski GM. AJR Am J Roentgenol; 1983 Aug; 141(2):409-11. PubMed ID: 6603147
15. How to develop and critique a research protocol. Karlik SJ. AJR Am J Roentgenol; 2001 Jun; 176(6):1375-80. PubMed ID: 11373195
16. [Determining the quality of rater judgements using intraclass correlation, and enhancing rater judgements]. Wirtz M. Rehabilitation (Stuttg); 2004 Dec; 43(6):384-9. PubMed ID: 15565540
17. When coders are reliable: the application of three measures to assess inter-rater reliability/agreement with doctor-patient communication data coded with the VR-CoDES. Fletcher I; Mazzi M; Nuebling M. Patient Educ Couns; 2011 Mar; 82(3):341-5. PubMed ID: 21316896
18. Rater biases in genetically informative research designs: comment on Bartels, Boomsma, Hudziak, van Beijsterveldt, and van den Oord (2007). Hoyt WT. Psychol Methods; 2007 Dec; 12(4):467-75. PubMed ID: 18179356
19. Observer variation in the diagnosis of thyroid disorders. Criteria for and impact on diagnostic decision-making. Jarløv AE. Dan Med Bull; 2000 Nov; 47(5):328-39. PubMed ID: 11155660
20. Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed. Slaug B; Schilling O; Helle T; Iwarsson S; Carlsson G; Brandt Å. J Clin Epidemiol; 2012 Sep; 65(9):1016-25. PubMed ID: 22742912
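Several of the listed articles (e.g., entries 4, 5, and 17) discuss Cohen's kappa as a chance-corrected measure of agreement between two raters. As a minimal illustrative sketch, not drawn from any of the cited papers, the Python snippet below computes kappa = (p_o - p_e) / (1 - p_e) from two raters' categorical labels; the function name and the example rating data are hypothetical.

    # Illustrative sketch only: Cohen's kappa for two raters labelling the same items.
    from collections import Counter

    def cohens_kappa(ratings_a, ratings_b):
        """Return Cohen's kappa for two equal-length lists of categorical ratings."""
        assert len(ratings_a) == len(ratings_b) and ratings_a
        n = len(ratings_a)

        # Observed agreement: proportion of items both raters labelled identically.
        p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

        # Chance-expected agreement from each rater's marginal category frequencies.
        freq_a = Counter(ratings_a)
        freq_b = Counter(ratings_b)
        p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

        # kappa = (p_o - p_e) / (1 - p_e); undefined when chance agreement is perfect.
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical example: two raters scoring 10 items as 1 (positive) or 0 (negative).
    rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    rater_2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
    print(round(cohens_kappa(rater_1, rater_2), 3))  # 0.524: moderate agreement beyond chance

Here the raw agreement is 0.8, but because both raters label most items positive, chance agreement is already 0.58, so the chance-corrected kappa is only about 0.52; this gap between raw and chance-corrected agreement is the issue the kappa literature above examines.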