

BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

183 related articles for article (PubMed ID: 26095449)

  • 41. Bayesian approaches to the weighted kappa-like inter-rater agreement measures.
    Tran QD; Demirhan H; Dolgun A
    Stat Methods Med Res; 2021 Oct; 30(10):2329-2351. PubMed ID: 34448633

  • 42. Pitfalls in the use of kappa when interpreting agreement between multiple raters in reliability studies.
    O'Leary S; Lund M; Ytre-Hauge TJ; Holm SR; Naess K; Dalland LN; McPhail SM
    Physiotherapy; 2014 Mar; 100(1):27-35. PubMed ID: 24262334

  • 43. Persistent inter-observer variability of breast density assessment using BI-RADS® 5th edition guidelines.
    Portnow LH; Georgian-Smith D; Haider I; Barrios M; Bay CP; Nelson KP; Raza S
    Clin Imaging; 2022 Mar; 83():21-27. PubMed ID: 34952487

  • 44. Breast and cervical cancer screening: clinicians' views on health plan guidelines and implementation efforts.
    Zapka JG; Puleo E; Taplin S; Solberg LI; Mouchawar J; Somkin C; Geiger AM; Ulcickas Yood M
    J Natl Cancer Inst Monogr; 2005; (35):46-54. PubMed ID: 16287885

  • 45. Evaluating inter-rater reliability in the context of "Sysmex UN2000 detection of protein/creatinine ratio and of renal tubular epithelial cells can be used for screening lupus nephritis": a statistical examination.
    Li M; Gao Q; Yang J; Yu T
    BMC Nephrol; 2024 Mar; 25(1):94. PubMed ID: 38481181

  • 46. Log-linear modelling of pairwise interobserver agreement on a categorical scale.
    Becker MP; Agresti A
    Stat Med; 1992 Jan; 11(1):101-14. PubMed ID: 1557566

  • 47. [Quality criteria of assessment scales--Cohen's kappa as a measure of interrater reliability (1)].
    Mayer H; Nonn C; Osterbrink J; Evers GC
    Pflege; 2004 Feb; 17(1):36-46. PubMed ID: 15040245

  • 48. Correcting for rater bias in scores on a continuous scale, with application to breast density.
    Sperrin M; Bardwell L; Sergeant JC; Astley S; Buchan I
    Stat Med; 2013 Nov; 32(26):4666-78. PubMed ID: 23674384

  • 49. Detection of grey zones in inter-rater agreement studies.
    Demirhan H; Yilmaz AE
    BMC Med Res Methodol; 2023 Jan; 23(1):3. PubMed ID: 36604617

  • 50. The role of raters' threshold in estimating interrater agreement.
    Nucci M; Spoto A; Altoè G; Pastore M
    Psychol Methods; 2021 Oct; 26(5):622-634. PubMed ID: 34855432

  • 51. Inter- and intra-observer agreement in the assessment of the cervical transformation zone (TZ) by visual inspection with acetic acid (VIA) and its implications for a screen and treat approach: a reliability study.
    Benkortbi K; Catarino R; Wisniak A; Kenfack B; Tincho Foguem E; Venegas G; Mulindi M; Horo A; Jeronimo J; Vassilakos P; Petignat P
    BMC Womens Health; 2023 Jan; 23(1):27. PubMed ID: 36658551

  • 52. Comparison of the validity and reliability of two image classification systems for the assessment of mammogram quality.
    Moreira C; Svoboda K; Poulos A; Taylor R; Page A; Rickard M
    J Med Screen; 2005; 12(1):38-42. PubMed ID: 15814018

  • 53. Large-Sample Variance of Fleiss Generalized Kappa.
    Gwet KL
    Educ Psychol Meas; 2021 Aug; 81(4):781-790. PubMed ID: 34267400

  • 54. [Breast density assessment and organised breast cancer screening].
    Bambara AT; Ouédraogo NA; Ouédraogo PA; Bénao OLB; Ouédraogo W; Savadogo LGB; Ousséini D; Rabiou C
    Bull Cancer; 2023 Sep; 110(9):903-911. PubMed ID: 37468338

  • 55. Weighted specific-category kappa measure of interobserver agreement.
    Kvålseth TO
    Psychol Rep; 2003 Dec; 93(3 Pt 2):1283-90. PubMed ID: 14765602

  • 56. Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters.
    Li M; Gao Q; Yu T
    BMC Cancer; 2023 Aug; 23(1):799. PubMed ID: 37626309

  • 57. Reliability of Patient-Led Screening with the Malnutrition Screening Tool: Agreement between Patient and Health Care Professional Scores in the Cancer Care Ambulatory Setting.
    Di Bella A; Blake C; Young A; Pelecanos A; Brown T
    J Acad Nutr Diet; 2018 Jun; 118(6):1065-1071. PubMed ID: 29398570

  • 58. The impact of subjective image quality evaluation in mammography.
    Alukić E; Homar K; Pavić M; Žibert J; Mekiš N
    Radiography (Lond); 2023 May; 29(3):526-532. PubMed ID: 36913787

  • 59. Graphical aids for visualizing and interpreting patterns in departures from agreement in ordinal categorical observer agreement data.
    Bangdiwala SI
    J Biopharm Stat; 2017; 27(5):773-783. PubMed ID: 28010186

  • 60. Assessing observer agreement in studies involving replicated binary observations.
    Haber M; Gao J; Barnhart HX
    J Biopharm Stat; 2007; 17(4):757-66. PubMed ID: 17613652
