Item Response Modeling of Multivariate Count Data with Zero-Inflation, Maximum Inflation, and Heaping

Last Modified
  • March 19, 2019
Creator
  • Magnus, Brooke
    • Affiliation: College of Arts and Sciences, Department of Psychology and Neuroscience
Abstract
  • Questionnaires that include items eliciting count responses are becoming increasingly common in psychology and health research. Item response data from these types of questionnaires pose analytic challenges, including inflation at zero and the maximum, as well as heaping at preferred digits; such data complexities are not well-suited for conventional IRT modeling approaches and software. This research proposes methodological techniques to overcome those challenges by combining approaches from three related but distinct literatures: IRT models for multivariate count data, latent variable models for heaping and extreme responding, and mixture IRT models. Scales from the Behavioral Risk Factor Surveillance System are used as motivating examples in addressing three questions. First, what are some methods of addressing inflation and heaping in multivariate count item response data? Second, are complex models really needed, or can heaping and inflation in count data be ignored? And finally, what value do count items add to scales? The results suggest that count item response data can be modeled within a latent class IRT framework. The proposed latent class IRT model has a Poisson or negative binomial component for a class of individuals who respond to items according to a strict count process, a nominal response component for a class of individuals who respond to items according to a multiple choice or rounding process, and two degenerate models to describe some of the individuals who always endorse the minimum or maximum counts. A comparison of the full model with more parsimonious models reveals that all four latent classes are needed to describe the empirical item response distributions. Methods of computing scale scores are described. The results also provide evidence that including count items on scales may improve measurement precision, but the degree of improvement is dependent on latent class membership. Count items are likely to be most informative when respondents engage in a true count process. The results also support the idea that if count items are to be used on scales, it is advisable to include more than one. Practical implications are discussed and recommendations are provided for researchers who may wish to use count items on questionnaires.
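The four-class structure described in the abstract can be illustrated with a small simulation. The sketch below is not the dissertation's estimation model; it is a hypothetical data-generating process, with assumed class proportions, a Poisson rate function, and rounding to multiples of 5 standing in for the heaping process:

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_item_responses(n_persons=1000, n_items=3, max_count=30,
                            class_probs=(0.5, 0.3, 0.1, 0.1)):
    """Simulate count item responses from a four-class mixture:
    (0) strict count process: Poisson counts driven by a latent trait,
    (1) heaping/rounding process: counts rounded to multiples of 5,
    (2) degenerate minimum class: always responds 0,
    (3) degenerate maximum class: always responds max_count.
    All parameter values here are illustrative assumptions.
    """
    theta = rng.normal(size=n_persons)            # latent trait
    classes = rng.choice(4, size=n_persons, p=class_probs)
    responses = np.zeros((n_persons, n_items), dtype=int)
    for j in range(n_items):
        lam = np.exp(1.0 + 0.5 * theta)           # log-linear Poisson rate
        counts = rng.poisson(lam)
        heaped = (np.round(counts / 5) * 5).astype(int)
        responses[:, j] = np.where(classes == 0, counts,
                          np.where(classes == 1, heaped,
                          np.where(classes == 2, 0, max_count)))
    return np.clip(responses, 0, max_count), classes


resp, cls = simulate_item_responses()
```

Plotting the marginal distribution of `resp` reproduces the data features the abstract names: spikes at zero and the maximum, and heaps at multiples of 5 superimposed on a smooth count distribution.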
Rights statement
  • In Copyright
Advisor
  • Thissen, David
  • Curran, Patrick
  • Castro-Schilo, Laura
  • Preisser, John
  • Youngstrom, Eric
Degree
  • Doctor of Philosophy
Degree granting institution
  • University of North Carolina at Chapel Hill Graduate School
Graduation year
  • 2016