Assessment of the Parameters of Delta State Basic Education Certificate Mathematics Examination Using the 3-Parameter Logistic Model of Item Response Theory
Abstract
The study assessed the parameters of the Delta State Basic Education Certificate Mathematics Examination using the 3-parameter logistic model of item response theory. The aim was to assess the item difficulty, discrimination and pseudo-guessing parameters of the examination using the 3-parameter item response theory model. Three research questions guided the study. The multiple triangulation research design was used. The population comprised 140,959 Junior Secondary School (JSS) 3 students in the 2022/2023 academic session in Delta State. A sample of 1,000 students was selected through a multistage sampling procedure. The main instrument used for the study was the Mathematics Achievement Test (MAT) administered in the 2022 Delta State Basic Education Certificate Examination. The a, b and c parameters of the dichotomous item response theory model were used to answer research questions 1, 2 and 3 respectively. The findings of the study revealed that some of the items (29 out of 60) were very difficult, some (11 out of 60) were difficult, while others (20 out of 60) were moderately difficult; that one item out of 60 had poor quality, four out of 60 were marginally satisfactory, most (23 out of 60) were moderate, some (15 out of 60) were good, while the remaining 17 out of 60 were satisfactory; and that some of the items in the test (12 out of 60) had a guessing index exceeding the recommended 0.25 threshold, while the remaining items (48 out of 60) exhibited guessing indices ranging from 0.005 to 0.24. The findings also showed that the reliability of the MAT instrument was high within the framework of IRT, as indicated by a value of 0.70; that the majority of the items in the test (55 out of 60) had a good fit in the overall model while 5 did not; and that the majority of the items in the test measured a single construct, as shown in the scree plot.
The study recommended, among others, that examination bodies responsible for test development should consider revising the test to ensure a more balanced distribution of difficulty levels. This can help mitigate potential biases and ensure that the test effectively assesses a wide range of abilities.
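For readers unfamiliar with the model named in the abstract, the 3-parameter logistic (3PL) item characteristic curve relates an examinee's ability to the probability of a correct response through the discrimination (a), difficulty (b) and pseudo-guessing (c) parameters. The sketch below uses the standard 3PL formula; the parameter values shown are illustrative assumptions, not estimates from the study.

```python
import math

def p_correct(theta, a, b, c):
    """3PL item characteristic curve: probability that an examinee
    with ability theta answers the item correctly, given
    discrimination a, difficulty b, and pseudo-guessing c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative values (not taken from the study): a moderately
# discriminating item (a = 1.0) of average difficulty (b = 0.0)
# whose guessing floor sits at the 0.25 threshold the abstract cites.
for theta in (-2.0, 0.0, 2.0):
    print(round(p_correct(theta, a=1.0, b=0.0, c=0.25), 3))
```

Note that even an examinee of very low ability answers correctly with probability approaching c, which is why items whose estimated c exceeds 0.25 are flagged as guessable.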
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.