A Summary of Peer-Reviewed Psychometric Evaluations of Assessments for Post-Stroke Aphasia

Purpose: The purpose of this systematic review is to summarize the amount of peer-reviewed quantitative information available about the psychometric properties of assessments for aphasia.

Background: When assessing people for aphasia, clinicians have many instruments from which to choose. The majority of the manuals for these tools are not peer-reviewed, calling into question the trustworthiness of the psychometric properties they report. Efforts have been made to describe the psychometric concurrence of different tools (Skenes et al., 1985), but a description of the quantity of peer-reviewed information available has yet to be published. When choosing an assessment, varying clinical situations demand different psychometric profiles. For a long-term rehabilitation patient, high intra-rater and test-retest reliability are paramount, whereas acute-stage assessment requires high validity to ensure an accurate diagnosis.

Methods: The authors used the following search terms: aphasia, diagnostic, evaluat*, assess*, test, tool, instrument, scale, battery, schedule, reliability, validity, psychometrics. The authors sought diagnostic or descriptive studies in which quantitative psychometric properties were established. The authors excluded screenings, assessments for apraxia of speech, and studies using participants with primary progressive aphasia. Psychometric evaluations of single items from assessments and non-binary comparisons (e.g., studies in which a psychometric property was measured across multiple assessments) were also excluded, as were tests and articles whose original language of publication was not English. The authors completed the study exclusion task for all articles obtained and resolved differences by consensus, resulting in 84 articles. Full-text review was then completed, again resolving differences by consensus, resulting in 14 articles for review.
Appraisal of the 14 articles was completed by both authors, with differences resolved by consensus, and data extraction was performed by the authors simultaneously. In data extraction, the authors included only quantitative measures of reliability and validity.

Results: Study appraisal resulted in overall ratings ranging from lower to good quality. No psychometric property was reported by more than one study for any test. Test-retest reliability was the most frequently reported measure (8/12 assessments), followed by inter-rater reliability (5/12), concurrent validity (5/12), and internal consistency (5/12).

Discussion: While unsurprising, the lack of peer-reviewed, quantitative information available regarding the psychometric properties of assessments for aphasia is problematic. Many of these assessments are administered at multiple stages of post-stroke recovery, requiring high temporal reliability and, in the acute stage, good validity. While this information can be obtained from manuals, the methodology may be questionable because manuals are not peer-reviewed. Additionally, comparing psychometric properties across manuals is not financially feasible for speech-language pathologists in most settings. Further evaluation of established, frequently used aphasia assessments is needed to enable clinicians to choose the most psychometrically appropriate tool for each situation.