PubMed for Handhelds
Title: Inter- and intrarater reliability of retrospective drug utilization reviewers.
Author: Zuckerman IH, Mulhearn DM, Metge CJ.
Journal: J Am Pharm Assoc (Wash); 1999; 39(1):45-9.
PubMed ID: 9990187.

Abstract:
OBJECTIVE: To assess inter- and intrarater reliability among 23 pharmacist and physician retrospective drug utilization reviewers and to assess interrater reliability after a reviewer training session.
DESIGN: Exploratory study.
SETTING: Maryland Medicaid's retrospective drug utilization review (DUR) program.
PARTICIPANTS: 23 physician and pharmacist retrospective drug utilization reviewers.
INTERVENTIONS: None.
MAIN OUTCOME MEASURES: Profiles rated as "intervention indicated" or "intervention not indicated." Cochran's Q test, overall percent agreement, and the unweighted kappa statistic were used in the analysis of review consistency.
RESULTS: Intrarater reliability showed substantial consistency among the 23 reviewers; the percent agreement was 82.9% with kappa = 0.66. Interrater reliability, however, was poor, with an overall agreement of 69.6% and kappa = 0.16. Interrater reliability remained poor after a one-hour reviewer training session (agreement 81.8%, kappa = -0.19).
CONCLUSION: The implicit review process used in the retrospective DUR program that we evaluated was unreliable. Since reliability is a necessary but not sufficient condition for the validity of an indicator of inappropriate drug use, the validity of the DUR implicit review process is in question.
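
The reliability figures above rest on overall percent agreement and the unweighted kappa statistic. As a rough illustration only (not part of the study, using made-up counts), the Python sketch below shows how unweighted Cohen's kappa is computed from a two-rater agreement table, and why raw agreement near 80% can still yield a kappa near zero when most profiles fall into a single category.

    # Minimal sketch of the unweighted (Cohen's) kappa statistic for two raters
    # classifying profiles as "intervention indicated" vs. "intervention not indicated".
    # The counts below are hypothetical and are not taken from the study.

    def cohens_kappa(table):
        """Unweighted Cohen's kappa from a square agreement table.

        table[i][j] = number of profiles placed in category i by rater A
        and category j by rater B.
        """
        n = sum(sum(row) for row in table)
        k = len(table)
        # Observed agreement: proportion of profiles on the diagonal.
        p_o = sum(table[i][i] for i in range(k)) / n
        # Chance-expected agreement, computed from the row and column marginals.
        p_e = sum(
            (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
            for i in range(k)
        )
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical table: 82% raw agreement, but both raters call most profiles
    # "not indicated", so chance agreement is also high and kappa is close to zero.
    example = [[80, 9],   # rater A: not indicated
               [9, 2]]    # rater A: indicated
    print(round(cohens_kappa(example), 2))  # -> 0.08

This mirrors the pattern reported in the results above, where agreement in the range of roughly 70% to 82% coexists with kappa values at or below 0.2.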