The Consistency of Detecting Item Bias across Different Test Administrations: Implications of Another Failure
Gary Skaggs and Robert W. Lissitz
Journal of Educational Measurement
Vol. 29, No. 3 (Autumn, 1992), pp. 227-242
Published by: National Council on Measurement in Education
Stable URL: http://www.jstor.org/stable/1435136
Page Count: 16
Topics: Test bias, Sampling bias, Educational administration, Research biases, Statistics, Statistical bias, Educational research, Gender bias, Meetings, Curricula
Several item bias detection methods were applied to an analysis of bias between males and females on items from a curriculum-based mathematics test. The focus of the analysis was the consistency of the methods across different administrations of the same items. The results indicated that, of the methods studied, the Mantel-Haenszel (M-H) and IRT-based sum-of-squares methods were the most consistent. Even so, the reliability and agreement of these methods were modest at best. As with most prior research, no reasonable explanation could be found for the most consistently flagged items. A likely reason lies in the confounding of visible group characteristics with examinees' instructional backgrounds. A multidimensional perspective on item bias is proposed for future research that would take such confounding into account.
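For readers unfamiliar with the Mantel-Haenszel approach named in the abstract, the following is a minimal illustrative sketch, not the authors' exact procedure. Examinees are stratified by total test score; within each stratum a 2x2 table of group (reference vs. focal) by item response (correct vs. incorrect) is formed, and a common odds ratio is pooled across strata. The function name and example counts below are hypothetical.

```python
def mantel_haenszel_odds_ratio(strata):
    """Pooled Mantel-Haenszel odds ratio across score strata.

    strata: list of (A, B, C, D) counts per stratum, where
      A = reference group correct,  B = reference group incorrect,
      C = focal group correct,      D = focal group incorrect.
    A value near 1.0 suggests no differential item functioning.
    """
    num = 0.0
    den = 0.0
    for A, B, C, D in strata:
        T = A + B + C + D
        if T == 0:
            continue  # skip empty strata
        num += A * D / T
        den += B * C / T
    return num / den


# Hypothetical example: two score strata with similar odds across groups
strata = [(40, 10, 38, 12), (30, 20, 28, 22)]
alpha_mh = mantel_haenszel_odds_ratio(strata)
```

In practice a chi-square significance test and a log-odds effect-size transformation usually accompany this estimate; the sketch shows only the pooled odds ratio at the core of the method.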
Journal of Educational Measurement © 1992 National Council on Measurement in Education