BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

999 related articles for article (PubMed ID: 884196)

  • 1. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers.
    Landis JR; Koch GG
    Biometrics; 1977 Jun; 33(2):363-74. PubMed ID: 884196
    [No Abstract]   [Full Text] [Related]  

  • 2. Kappa statistics in the assessment of observer variation: the significance of multiple observers classifying ankle fractures.
    Thomsen NO; Olsen LH; Nielsen ST
    J Orthop Sci; 2002; 7(2):163-6. PubMed ID: 11956974
    [TBL] [Abstract][Full Text] [Related]  

  • 3. Pathologists should probably forget about kappa. Percent agreement, diagnostic specificity and related metrics provide more clinically applicable measures of interobserver variability.
    Marchevsky AM; Walts AE; Lissenberg-Witte BI; Thunnissen E
    Ann Diagn Pathol; 2020 Aug; 47():151561. PubMed ID: 32623312
    [TBL] [Abstract][Full Text] [Related]  

  • 4. Reproducibility of histomorphologic diagnoses with special reference to the kappa statistic.
    Svanholm H; Starklint H; Gundersen HJ; Fabricius J; Barlebo H; Olsen S
    APMIS; 1989 Aug; 97(8):689-98. PubMed ID: 2669853
    [TBL] [Abstract][Full Text] [Related]  

  • 5. Interobserver and intraobserver reliability in the load sharing classification of the assessment of thoracolumbar burst fractures.
    Dai LY; Jin WJ
    Spine (Phila Pa 1976); 2005 Feb; 30(3):354-8. PubMed ID: 15682019
    [TBL] [Abstract][Full Text] [Related]  

  • 6. The measurement of observer agreement for categorical data.
    Landis JR; Koch GG
    Biometrics; 1977 Mar; 33(1):159-74. PubMed ID: 843571
    [TBL] [Abstract][Full Text] [Related]  

  • 7. Measures of interrater agreement.
    Mandrekar JN
    J Thorac Oncol; 2011 Jan; 6(1):6-7. PubMed ID: 21178713
    [TBL] [Abstract][Full Text] [Related]  

  • 8. Inter-observer variation in assessment of undescended testis. Analysis of kappa statistics as a coefficient of reliability.
    Olsen LH
    Br J Urol; 1989 Dec; 64(6):644-8. PubMed ID: 2576391
    [TBL] [Abstract][Full Text] [Related]  

  • 9. Observer variation in assessment of liver biopsies including analysis by kappa statistics.
    Theodossi A; Skene AM; Portmann B; Knill-Jones RP; Patrick RS; Tate RA; Kealey W; Jarvis KJ; O'Brian DJ; Williams R
    Gastroenterology; 1980 Aug; 79(2):232-41. PubMed ID: 7399228
    [TBL] [Abstract][Full Text] [Related]  

  • 10. [Cohen's kappa – a measure of agreement between observers].
    Lydersen S
    Tidsskr Nor Laegeforen; 2018 Mar; 138(5):. PubMed ID: 29513468
    [No Abstract]   [Full Text] [Related]  

  • 11. Inter-observer and intra-observer variability of the Oxford clinical cataract classification and grading system.
    Sparrow JM; Ayliffe W; Bron AJ; Brown NP; Hill AR
    Int Ophthalmol; 1988 Jan; 11(3):151-7. PubMed ID: 3417387
    [TBL] [Abstract][Full Text] [Related]  

  • 12. Statistical methods in epidemiology. V. Towards an understanding of the kappa coefficient.
    Rigby AS
    Disabil Rehabil; 2000 May; 22(8):339-44. PubMed ID: 10896093
    [TBL] [Abstract][Full Text] [Related]  

  • 13. [The use of kappa in the study of variability between observers].
    Veldhuyzen van Zanten SJ; Hijdra A
    Ned Tijdschr Geneeskd; 1988 Jan; 132(5):199-202. PubMed ID: 3340243
    [No Abstract]   [Full Text] [Related]  

  • 14. Palpation of the femoral and popliteal pulses: a study of the accuracy as assessed by agreement between multiple observers.
    Myers KA; Scott DF; Devine TJ; Johnston AH; Denton MJ; Gilfillan IS
    Eur J Vasc Surg; 1987 Aug; 1(4):245-9. PubMed ID: 3454755
    [TBL] [Abstract][Full Text] [Related]  

  • 15. [Measurement of agreement between 2 judges. Qualitative cases].
    Fermanian J
    Rev Epidemiol Sante Publique; 1984; 32(2):140-7. PubMed ID: 6484261
    [TBL] [Abstract][Full Text] [Related]  

  • 16. The statistical analysis of kappa statistics in multiple samples.
    Donner A; Klar N
    J Clin Epidemiol; 1996 Sep; 49(9):1053-8. PubMed ID: 8780616
    [TBL] [Abstract][Full Text] [Related]  

  • 17. Inter- and intra-observer agreement in the assessment of the quality of spontaneous movements in the newborn.
    van Kranen-Mastenbroek V; van Oostenbrugge R; Palmans L; Stevens A; Kingma H; Blanco C; Hasaart T; Vles J
    Brain Dev; 1992 Sep; 14(5):289-93. PubMed ID: 1456381
    [TBL] [Abstract][Full Text] [Related]  

  • 18. Reliability studies of diagnostic tests are not using enough observers for robust estimation of interobserver agreement: a simulation study.
    Sadatsafavi M; Najafzadeh M; Lynd L; Marra C
    J Clin Epidemiol; 2008 Jul; 61(7):722-7. PubMed ID: 18486446
    [TBL] [Abstract][Full Text] [Related]  

  • 19. Modelling observer agreement--an alternative to kappa.
    May SM
    J Clin Epidemiol; 1994 Nov; 47(11):1315-24. PubMed ID: 7722568
    [TBL] [Abstract][Full Text] [Related]  

  • 20. Consistency in the observation of features used to classify duct carcinoma in situ (DCIS) of the breast.
    Douglas-Jones AG; Morgan JM; Appleton MA; Attanoos RL; Caslin A; Champ CS; Cotter M; Dallimore NS; Dawson A; Fortt RW; Griffiths AP; Hughes M; Kitching PA; O'Brien C; Rashid AM; Stock D; Verghese A; Williams DW; Williams NW; Williams S
    J Clin Pathol; 2000 Aug; 53(8):596-602. PubMed ID: 11002762
    [TBL] [Abstract][Full Text] [Related]  
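Many of the articles above turn on Cohen's kappa and on the agreement benchmarks proposed by Landis and Koch (article 6). As an illustrative aside, not drawn from any single article listed here, the following minimal Python sketch computes two-rater Cohen's kappa from first principles; the function name and the example labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance, derived from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal proportions, summed
    # over categories. (Undefined when p_e == 1, i.e. both raters
    # always use the same single category.)
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical observers classifying ten biopsies.
a = ["benign", "benign", "malignant", "benign", "malignant",
     "benign", "benign", "malignant", "benign", "benign"]
b = ["benign", "malignant", "malignant", "benign", "malignant",
     "benign", "benign", "benign", "benign", "benign"]
print(f"kappa = {cohens_kappa(a, b):.3f}")
```

With these hypothetical labels, observed agreement is 0.80 but kappa is only about 0.47, because kappa discounts the agreement expected by chance; on the Landis-Koch benchmarks from article 6, 0.47 falls in the "moderate" band (0.41-0.60).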
