Examining an online content general outcome measure: Technical features of the static score
Document Type
Article
Publication Date
9-1-2013
Abstract
The purpose of this study was to evaluate technical adequacy features of an online adaptation of vocabulary matching known as critical content monitoring. Validity and reliability studies were conducted in fifth-grade science content with a sample of 106 students from one school. Participants were administered 20 parallel forms of the general outcome measure over a 2-week period. Criterion-related validity correlations with a statewide accountability test ranged from .36 to .55 across the 20 parallel forms. A pooled estimate of a common correlation between the state test and the probes was .45. Although differences among the correlations were not found, statistically significant differences in probe mean scores were identified across the body of parallel forms. Student commentary regarding the online assessment process was largely positive. Alternate-form reliability correlations ranged from .21 to .73, with a median correlation of .56. Limitations and implications are addressed. © Hammill Institute on Disabilities 2013.
Publication Source (Journal or Book title)
Assessment for Effective Intervention
First Page
249
Last Page
260
Recommended Citation
Mooney, P., McCarter, K., Russo, R., & Blackwood, D. (2013). Examining an online content general outcome measure: Technical features of the static score. Assessment for Effective Intervention, 38(4), 249-260. https://doi.org/10.1177/1534508413488794