

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

183 related articles for article (PubMed ID: 26095449)

  • 21. Midwives' visual interpretation of intrapartum cardiotocographs: intra- and inter-observer agreement.
    Devane D; Lalor J
    J Adv Nurs; 2005 Oct; 52(2):133-41. PubMed ID: 16164474

  • 22. Interobserver agreement: Cohen's kappa coefficient does not necessarily reflect the percentage of patients with congruent classifications.
    Steinijans VW; Diletti E; Bömches B; Greis C; Solleder P
    Int J Clin Pharmacol Ther; 1997 Mar; 35(3):93-5. PubMed ID: 9088995

  • 23. Inter-rater reliability of seven neurolaryngologists in laryngeal EMG signal interpretation.
    Ho GY; Leonhard M; Volk GF; Foerster G; Pototschnig C; Klinge K; Granitzka T; Zienau AK; Schneider-Stickler B
    Eur Arch Otorhinolaryngol; 2019 Oct; 276(10):2849-2856. PubMed ID: 31312924

  • 24. Inter-observer agreement according to three methods of evaluating mammographic density and parenchymal pattern in a case control study: impact on relative risk of breast cancer.
    Winkel RR; von Euler-Chelpin M; Nielsen M; Diao P; Nielsen MB; Uldall WY; Vejborg I
    BMC Cancer; 2015 Apr; 15():274. PubMed ID: 25884160

  • 25. A unified approach for assessing agreement for continuous and categorical data.
    Lin L; Hedayat AS; Wu W
    J Biopharm Stat; 2007; 17(4):629-52. PubMed ID: 17613645

  • 26. The agreement chart.
    Bangdiwala SI; Shankar V
    BMC Med Res Methodol; 2013 Jul; 13():97. PubMed ID: 23890315

  • 27. Hubert's multi-rater kappa revisited.
    Martín Andrés A; Álvarez Hernández M
    Br J Math Stat Psychol; 2020 Feb; 73(1):1-22. PubMed ID: 31056757

  • 28. Kappa-like indices of observer agreement viewed from a latent class perspective.
    Guggenmoos-Holzmann I; Vonk R
    Stat Med; 1998 Apr; 17(8):797-812. PubMed ID: 9595612

  • 29. Homogeneity score test of AC1 statistics and estimation of common AC1 in multiple or stratified inter-rater agreement studies.
    Honda C; Ohyama T
    BMC Med Res Methodol; 2020 Feb; 20(1):20. PubMed ID: 32020851

  • 30. Clinicians are right not to like Cohen's κ.
    de Vet HC; Mokkink LB; Terwee CB; Hoekstra OS; Knol DL
    BMJ; 2013 Apr; 346():f2125. PubMed ID: 23585065

  • 31. Modeling agreement on categorical scales in the presence of random scorers.
    Vanbelle S; Lesaffre E
    Biostatistics; 2016 Jan; 17(1):79-93. PubMed ID: 26395905

  • 32. Assessing method agreement for paired repeated binary measurements administered by multiple raters.
    Wang W; Lin N; Oberhaus JD; Avidan MS
    Stat Med; 2020 Feb; 39(3):279-293. PubMed ID: 31788847

  • 33. Measuring Agreement Using Guessing Models and Knowledge Coefficients.
    Moss J
    Psychometrika; 2023 Sep; 88(3):1002-1025. PubMed ID: 37291419

  • 34. Inter- and intra-rater agreement in the assessment of the vascularity of spinal metastases using digital subtraction angiography tumor blush.
    Clausen C; Dahl B; Christiansen Frevert S; Forman JL; Nielsen MB; Lönn L
    Acta Radiol; 2017 Jun; 58(6):734-739. PubMed ID: 27650032

  • 35. Inter-Rater Agreement Estimates for Data With High Prevalence of a Single Response.
    Waugh SM; He J
    J Nurs Meas; 2019 Aug; 27(2):152-161. PubMed ID: 31511402

  • 36. Measuring inter-rater reliability for nominal data - which coefficients and confidence intervals are appropriate?
    Zapf A; Castell S; Morawietz L; Karch A
    BMC Med Res Methodol; 2016 Aug; 16():93. PubMed ID: 27495131

  • 37. Measures of Agreement with Multiple Raters: Fréchet Variances and Inference.
    Moss J
    Psychometrika; 2024 Jun; 89(2):517-541. PubMed ID: 38190018

  • 38. Kappa statistics to measure interrater and intrarater agreement for 1790 cervical biopsy specimens among twelve pathologists: qualitative histopathologic analysis and methodologic issues.
    Malpica A; Matisic JP; Niekirk DV; Crum CP; Staerkel GA; Yamal JM; Guillaud MH; Cox DD; Atkinson EN; Adler-Storthz K; Poulin NM; Macaulay CA; Follen M
    Gynecol Oncol; 2005 Dec; 99(3 Suppl 1):S38-52. PubMed ID: 16183106

  • 39. [Analyzing interrater agreement for categorical data using Cohen's kappa and alternative coefficients].
    Wirtz M; Kutschmann M
    Rehabilitation (Stuttg); 2007 Dec; 46(6):370-7. PubMed ID: 18188809

  • 40. Weighted least-squares approach for comparing correlated kappa.
    Barnhart HX; Williamson JM
    Biometrics; 2002 Dec; 58(4):1012-9. PubMed ID: 12495157
