Exploring the application of nonparametric item response theory techniques to alternate assessments

Stephen Cubbellotti, Fordham University

Abstract

Currently, AA-AAS (alternate assessment based on alternate achievement standards) populations are, understandably, viewed as a single population. However, there is little research to show that this is an appropriate assumption, and as a result the calibration of how well the items on these assessments work together to provide an accurate score for students may be flawed. Clark and Watson (1995, p. 315) state that "it may be desirable to retain items that assess important construct-relevant information in one type of sample, even if they have extremely unbalanced distributions (and relatively poor psychometric properties) in others." The intent of this research was to examine the appropriateness of retaining certain items for students with different disabilities and of viewing those students as separate populations. One aspect of this study aimed to identify where the score distribution deviates substantially from normality. In addition to total score, individual items were assessed for differences in item response patterns among identified subgroups, along with the potential implications for writing items for those subgroups. Beyond the role of subgroups, students' level of independence and its effect on scores and dimensionality were explored. Finally, this research highlighted issues with item properties and objectives across different states and suggested rationales for why it may be difficult to develop items that meet the objective criteria. Data from three states, spanning two content areas and six grades, were analyzed; each test contained approximately 30 items and averaged about 600 participants. Total score means and standard deviations, point-biserial correlations, differential item functioning (DIF), kappa, and several person-fit and item-fit measures were calculated. Clear trends suggest that two groups are being measured by these tests: a group whose primary clinical diagnosis (PCD) is less cognitively related and a group whose PCD is more cognitively related. Because this was an exploratory study, it is recommended that a confirmatory analysis be conducted. If confirmed, administering two separate tests, one for each group, or analyzing the data with multiple-group item response theory may provide a better measure of the ability of these two groups.
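
The dissertation abstract does not include analysis code; the sketch below is purely illustrative and shows one of the item statistics named above, the point-biserial correlation, computed here in its corrected (item-rest) form on hypothetical 0/1 response data. The function name, variable names, and the toy data matrix are assumptions for illustration, not values or code from the study.

```python
import numpy as np

def item_rest_correlations(responses):
    """Point-biserial (item-rest) correlation for each dichotomous item.

    responses: 2-D array, persons x items, scored 0/1.
    Each item is correlated with the rest score (total score with the
    focal item removed) so the item does not inflate its own criterion.
    For a 0/1 item, the Pearson correlation equals the point-biserial.
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    corrs = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]
        corrs.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return np.array(corrs)

# Hypothetical example: 6 students by 4 items, scored 0/1
data = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
])
print(item_rest_correlations(data))
```

In practice, low or negative item-rest correlations within an identified subgroup (while the same items perform adequately in another subgroup) would be one indicator of the kind of population differences the study explores.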

Subject Area

Educational tests & measurements

Recommended Citation

Cubbellotti, Stephen, "Exploring the application of nonparametric item response theory techniques to alternate assessments" (2013). ETD Collection for Fordham University. AAI3588209.
https://research.library.fordham.edu/dissertations/AAI3588209
