A method for the comparison of item selection rules in computerized adaptive testing

Juan Ramón Barrada, Julio Olea, Vicente Ponsoda, Francisco José Abad

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)


In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing (CAT), the common result is that the rules differ simultaneously in accuracy and security, making it difficult to conclude which is the more appropriate rule. This study proposes a strategy for a global comparison of two or more selection rules: a plot showing the performance of each selection rule across several maximum exposure rates is obtained, and the whole plot is compared with the plots of the other rules. The strategy was applied in a simulation study with fixed-length CATs to compare six item selection rules: point Fisher information, Fisher information weighted by likelihood, Kullback-Leibler information weighted by likelihood, maximum information stratification with blocking, and the progressive and proportional methods. Our results show that no rule is optimal across all overlap values or root mean square error (RMSE) levels: that a rule has lower RMSE than another at a given level of overlap does not imply that the same pattern holds at another overlap rate. A fair comparison of the rules therefore requires extensive manipulation of the maximum exposure rates. The best-performing methods were Kullback-Leibler information weighted by likelihood, the proportional method, and maximum information stratification with blocking. © The Author(s) 2010.
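The comparison strategy described in the abstract can be sketched in code: sweep the maximum exposure rate, and for each setting record both accuracy (RMSE of the ability estimates) and security (expected item overlap between two examinees). The sketch below is a minimal illustration only, assuming a Rasch (1PL) item bank, point-Fisher-information selection, a simple hard exposure cap, and an EAP ability update; the function names and simplifications are the editor's, not the authors' implementation or the specific exposure-control methods compared in the paper.

```python
import numpy as np

def simulate_cat(bank_b, thetas, test_len, r_max, rng):
    """Fixed-length CAT under a Rasch (1PL) model.

    Item selection: point Fisher information at the provisional
    estimate, restricted by a hard exposure cap r_max (a crude
    stand-in for the exposure-control methods compared in the
    paper). Returns (rmse, overlap_rate).
    """
    n_items = len(bank_b)
    n_takers = len(thetas)
    exposure = np.zeros(n_items)          # administrations so far
    theta_hat = np.zeros(n_takers)
    grid = np.linspace(-4.0, 4.0, 81)     # EAP quadrature grid
    prior = np.exp(-grid ** 2 / 2.0)      # standard normal prior

    for t, theta in enumerate(thetas):
        est = 0.0
        used = np.zeros(n_items, dtype=bool)
        items, responses = [], []
        for _ in range(test_len):
            # eligible items: not yet administered, under the cap
            rate = exposure / max(t, 1)
            eligible = ~used & (rate < r_max)
            if not eligible.any():        # cap infeasible: relax it
                eligible = ~used
            # Rasch Fisher information at the estimate: p(1 - p)
            p = 1.0 / (1.0 + np.exp(-(est - bank_b)))
            info = np.where(eligible, p * (1.0 - p), -np.inf)
            j = int(np.argmax(info))
            used[j] = True
            exposure[j] += 1
            # simulate the response from the true ability
            p_true = 1.0 / (1.0 + np.exp(-(theta - bank_b[j])))
            items.append(j)
            responses.append(rng.random() < p_true)
            # EAP update of the provisional ability estimate
            like = np.ones_like(grid)
            for jj, xx in zip(items, responses):
                pj = 1.0 / (1.0 + np.exp(-(grid - bank_b[jj])))
                like *= pj if xx else 1.0 - pj
            post = like * prior
            est = float((grid * post).sum() / post.sum())
        theta_hat[t] = est

    rmse = float(np.sqrt(np.mean((theta_hat - thetas) ** 2)))
    # expected overlap between two random takers:
    # sum of squared exposure rates divided by test length
    rates = exposure / n_takers
    overlap = float((rates ** 2).sum() / test_len)
    return rmse, overlap

rng = np.random.default_rng(0)
bank = rng.normal(0.0, 1.0, 200)          # item difficulties
takers = rng.normal(0.0, 1.0, 200)        # true abilities
capped = simulate_cat(bank, takers, 20, 0.15, rng)
free = simulate_cat(bank, takers, 20, np.inf, rng)
print("r_max=0.15 ->", capped)            # capped run: lower overlap
print("no cap     ->", free)
```

Repeating such a run over a grid of r_max values for each selection rule yields one accuracy-versus-security curve per rule; comparing whole curves, rather than single points, is the kind of global comparison the study proposes.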
Original language: English
Pages (from-to): 438-452
Journal: Applied Psychological Measurement
Issue number: 6
Publication status: Published - 24 Aug 2010


Keywords:
  • computerized adaptive testing
  • item exposure control
  • item selection
  • test security

