Item Response Theory (IRT) is widely used in psychometrics to model the relationship between an individual’s latent ability and their responses to test items. In this post, I’ll walk through a basic example of fitting an IRT model to simulated data and visualizing item characteristics.
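As a quick refresher, the two-parameter logistic (2PL) model, in the slope–intercept parameterization that mirt uses, gives the probability that person $i$ answers item $j$ correctly as:

```latex
P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-(a_j \theta_i + d_j)}}
```

where $\theta_i$ is the person's latent ability, $a_j$ is the item slope (discrimination), and $d_j$ is the item intercept (higher values make the item easier).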
Simulated Data
We’ll generate simulated test response data for 200 respondents and 10 items using the mirt package in R.
```r
# Simulate item response data
library(mirt)
#> Loading required package: stats4
#> Loading required package: lattice

set.seed(123)
n_items <- 10
n_people <- 200
sim_data <- simdata(a = rep(1, n_items),
                    d = seq(-2, 2, length.out = n_items),
                    N = n_people, itemtype = "2PL")
head(sim_data)
```
Item characteristic curves (ICCs) show the probability of a correct response as a function of latent ability. Let’s plot the ICCs for our items.
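The plotting code is not shown above, but a minimal sketch with mirt might look like the following; fitting a unidimensional 2PL model to `sim_data` and the model object name `mod` are assumptions on my part:

```r
# Fit a unidimensional 2PL model to the simulated responses (assumed specification)
mod <- mirt(sim_data, model = 1, itemtype = "2PL", verbose = FALSE)

# Plot the item characteristic (trace) curves for all items
plot(mod, type = "trace")
```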
Visualizing Test Information
The test information curve shows how much information the test provides across different levels of ability. Let’s visualize this next.
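Again the original code is not shown, but with mirt this could be sketched as follows; the block refits the assumed 2PL model so it stands on its own (reuse an existing `mod` if you already have one):

```r
# Fit (or reuse) the unidimensional 2PL model (assumed specification)
mod <- mirt(sim_data, model = 1, itemtype = "2PL", verbose = FALSE)

# Plot the test information function across the latent ability range
plot(mod, type = "info")
```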
Insights
From these analyses, we can observe:
- Item Difficulty: each item is most informative near its own difficulty level, so harder items discriminate best among higher-ability respondents.
- Test Information: the test provides the most information for respondents with abilities near 0, which matches the item difficulties, spread symmetrically from -2 to 2.
This example demonstrates how IRT can be used to evaluate test items and understand the relationship between ability and response patterns.
Conclusion
IRT is a powerful tool for understanding test performance and optimizing assessments. Future posts will dive deeper into differential item functioning (DIF) and other advanced topics in psychometric analysis.