MLE vs. Bayesian item exposure in non-cognitive type adaptive assessments with restricted item pools: Trait estimation, item selection and reliability
While historically computerized adaptive tests (CAT) have been a major test administration technique in cognitive fields (such as aptitude testing), the past decade has seen the development and application of CAT in non-cognitive domains such as quality-of-life evaluation, personnel selection and personality assessment. Although test security is less of a concern than in cognitive assessments, properly managing item exposure in non-cognitive CATs remains an issue of test validity, specifically in test-retest situations with limited subject populations. In this research, five item selection/item exposure control algorithms were evaluated within a frequentist Maximum Likelihood Estimation (MLE) framework and a Bayesian Expected A Posteriori (EAP) framework, using two restricted item pool sizes. The effects of algorithm, estimation framework and item pool size combinations on latent trait recovery, item pool selection/usage and test reliability were evaluated. Results showed that MLE algorithm alternatives would be better suited for instruments evaluating opinions and likings, while EAP versions would perform better in diagnostic assessments. Moreover, limited item pool sizes did not disadvantage CAT routines, provided that item pools were adequately constructed and test management objectives were realistic. Results also revealed that although the mixture item exposure control algorithms (versions of the Progressive Restricted Maximum Information procedure) made the best use of the item pools, their implementations involved suboptimal trade-offs in conditional exposure control. Guidelines and recommendations for test management are discussed.
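The MLE vs. EAP contrast at the heart of the study can be illustrated with a minimal sketch. The code below is not the dissertation's implementation; it assumes a dichotomous two-parameter logistic (2PL) model purely for illustration, and estimates a respondent's latent trait theta both by grid-search maximum likelihood and by EAP (the posterior mean under a standard normal prior). The key behavioral difference it shows is Bayesian shrinkage: EAP pulls estimates toward the prior mean and stays finite even for all-endorse response patterns, where the MLE diverges to the boundary.

```python
import numpy as np

def p_endorse(theta, a, b):
    # 2PL item response function: probability of endorsing each item
    # given trait level theta, discriminations a, and difficulties b.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def log_likelihood(theta, a, b, responses):
    # Log-likelihood of a 0/1 response pattern at a given theta.
    p = p_endorse(theta, a, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

GRID = np.linspace(-4, 4, 801)  # quadrature/search grid for theta

def estimate_mle(a, b, responses, grid=GRID):
    # Frequentist estimate: theta maximizing the likelihood.
    # For all-endorse (or all-reject) patterns this hits the grid boundary,
    # mirroring the non-convergence of the true MLE.
    ll = np.array([log_likelihood(t, a, b, responses) for t in grid])
    return grid[np.argmax(ll)]

def estimate_eap(a, b, responses, grid=GRID):
    # Bayesian EAP estimate: posterior mean of theta under a N(0, 1) prior,
    # computed by numerical quadrature over the grid.
    ll = np.array([log_likelihood(t, a, b, responses) for t in grid])
    posterior = np.exp(ll - ll.max()) * np.exp(-grid**2 / 2)
    posterior /= posterior.sum()
    return float(np.sum(grid * posterior))
```

For a mixed response pattern both estimators return finite values, with the EAP estimate shrunk toward zero relative to the MLE; for an all-endorse pattern the MLE runs to the top of the grid while the EAP estimate remains interior. This shrinkage is one reason the two frameworks can favor different applications, as the results above suggest.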
Chajewski, Michael, "MLE vs. Bayesian item exposure in non-cognitive type adaptive assessments with restricted item pools: Trait estimation, item selection and reliability" (2011). ETD Collection for Fordham University. AAI3495853.