

PUBMED FOR HANDHELDS



  • Title: Reviewing the quality of discourse information measures in aphasia.
    Author: Pritchard M, Hilari K, Cocks N, Dipper L.
    Journal: Int J Lang Commun Disord; 2017 Nov; 52(6):689-732. PubMed ID: 28560767.
    Abstract:
    BACKGROUND: Discourse is fundamental to everyday communication, and is an increasing focus of clinical assessment, intervention and research. Aphasia can affect the information a speaker communicates in discourse. Little is known about the psychometrics of the tools for measuring information in discourse, which means it is unclear whether these measures are of sufficient quality to be used as clinical outcome measures or diagnostic tools.
    AIMS: To profile the measures used to describe information in aphasic discourse, and to assess the quality of these measures against standard psychometric criteria.
    METHODS & PROCEDURES: A scoping review method was employed. Studies were identified using a systematic search of the Scopus, Medline and Embase databases. Standard psychometric criteria were used to evaluate the measures' psychometric properties.
    MAIN CONTRIBUTION: The current review summarizes and collates the information measures used to describe aphasic discourse, and evaluates their quality in terms of the psychometric properties of acceptability, reliability and validity. Seventy-six studies described 58 discourse information measures, with a mean of 2.28 measures used per study (SD = 1.29, range = 1-7). Measures were classified as 'functional' measures (n = 33), which focused on discourse macrostructure, and 'functional and structural' measures (n = 25), which combined micro-linguistic and macro-structural approaches to discourse. There were no reports of the acceptability of data generated by the measures (distribution of scores, missing data). Test-retest reliability was reported for just 8/58 measures, with 3/8 exceeding 0.80. Intra-rater reliability was reported for 9/58 measures, and in all cases percentage agreement was reported rather than a reliability statistic. Percentage agreement was also frequently reported for inter-rater reliability, with only 4/76 studies reporting reliability statistics, covering 12/58 measures; this was generally high (>0.80 for 11/12 measures). The majority of measures related clearly to the discourse production model, indicating content validity. A total of 36/58 measures were used to make 41 comparisons between participants with aphasia (PWA) and neurologically healthy participants (NHP), with 31/41 comparisons showing a difference between the groups. Four comparisons were made between discourse genres, with two measures showing a difference between genres and two measures showing no difference.
    CONCLUSIONS: There is currently insufficient information available to justify the use of discourse information measures as sole diagnostic or outcome measurement tools. Yet the majority of measures are rooted in relevant theory, and there is emerging evidence regarding their psychometric properties. There is significant scope for further psychometric strengthening of discourse information measurement tools.