Computer Assessment for Secondary School Tests

Authors

  • Sello, Q. M., Lecturer, Computer Science Dept., University of Botswana
  • Totev, D. M., Lecturer, Dept. of Maths-Science Education, University of Botswana
  • Kgosiemang, R., Senior Librarian, University of Botswana
  • Liu, Y., Lecturer, Dept. of Maths-Science Education, University of Botswana

Keywords:

Computer-based grading, Multiple-choice questions, Student assessment, Secondary schools, Manual marking

Abstract

Secondary school tests are a very important component of the student assessment process, and their frequency is directly related to success in end-of-year examinations. On the other hand, large student numbers can restrict the frequency of testing, which is undesirable in terms of quality education. Such a dilemma is clearly a challenging methodological and technological problem. In such cases a multiple-choice test paper may be the only feasible solution, given staff and time constraints. There are a few ways of marking such papers: a fully manual procedure with preset answer sheets (punched templates); a fully automated marking process applying optical character recognition (OCR) and scanners; or the use of special pens and answer sheets, which can reduce human involvement to a certain extent. The first approach is time consuming and error prone, and the stress under which the marking staff work further lowers the quality of marking. The other two techniques require additional investment and technological infrastructure, but the marking process is significantly improved (Mogey and Watt, 2009). Similar results can be obtained with the demonstrated software project, whose main features are as follows:

- No additional investment involved.

- Significantly improved accuracy of mark calculations through a fully automated marking procedure (see the sketch after this list).

- Reduced stress factor, since the computer keyboard is used in the most convenient way.

- Better synchronization between Marking and Quality Assurance staff.

- Significantly improved accuracy of the moderated scripts.

- Improved record keeping of grades that is individualized and easy to track and manage.

- Immediate statistical analysis of results.

- Improved record keeping with respect to the school archives.

- Registered error tolerance less than 1.5%.

- Ability for marked and moderated work to be accessed based on users' pre-determined rights.

- Multi-level hierarchical approach to data security and staff responsibility structure (www.tcexam.com, 2009).
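To make the automated marking and immediate statistical analysis mentioned above concrete, a minimal sketch follows. This is an illustration only, not the authors' implementation: the answer key, the student responses, and the one-mark-per-question scheme are hypothetical assumptions.

```python
# Illustrative sketch only: the answer key, responses, and one-mark-per-question
# scheme are assumptions, not the system described in the paper.
from statistics import mean, pstdev

ANSWER_KEY = "BDACA"  # hypothetical key: the correct option for each question

def mark_script(responses: str, key: str = ANSWER_KEY) -> int:
    """Total mark for one script: one point per response matching the key."""
    return sum(1 for given, correct in zip(responses, key) if given == correct)

def summarise(marks):
    """Immediate statistical analysis of a batch of marked scripts."""
    return {"n": len(marks), "mean": mean(marks), "std": pstdev(marks),
            "min": min(marks), "max": max(marks)}

if __name__ == "__main__":
    # Each string is one student's answers as keyed in at the keyboard.
    scripts = {"S001": "BDACA", "S002": "BDCCA", "S003": "ADACB"}
    marks = {sid: mark_script(ans) for sid, ans in scripts.items()}
    print(marks)                           # {'S001': 5, 'S002': 4, 'S003': 3}
    print(summarise(list(marks.values())))
```

Features such as per-user access rights, moderation tracking, and archival record keeping would sit on top of this core marking step and are not shown here.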

References

Botswana. Ministry of Education, Department of Curriculum Development and Evaluation. (2001). Botswana general certificate of secondary education teaching syllabus: Agriculture. Gaborone.

Botswana. Ministry of Education, Department of Curriculum Development and Evaluation. (1996). Home Economics: three year junior certificate programme. Gaborone.

Botswana. Ministry of Education, Department of Curriculum Development and Evaluation. (1998). Moral Education: three year junior certificate programme. Gaborone.

Botswana. Ministry of Education. (1994). Teachers handbook on criterion-referenced testing and continuous assessment. Gaborone.

Chase, C. I. (1986). Essay test scoring: interaction of relevant variables. Journal of Educational Measurement.

www.scantronform.com. (2009). Scantron Forms, Optical Mark Scanner home page.

e-Government Research Group, University of Botswana. (2004). Design and Development of e-Education Materials. 4th International Conference on On-line Learning (ICOOL 2004), pp. 23, 25.

Gronlund, N. (1968). Constructing achievement tests. Englewood Cliffs, N.J.: Prentice-Hall.

Haladyna, T. M. (1994). Developing and validating multiple-choice test items. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Lee, G. and Weerakoon, P. (2001). The role of computer aided assessment in health professional education: a comparison of student performances in computer-based and paper-and-pen multiple-choice tests. Medical Teacher, 23(2), 152-157. Retrieved September 26, 2009.

Lukhele, R., Thissen, D. and Wainer, H. (1992). On the relative value of multiple-choice, constructed-response, and examinee-selected items on two achievement tests. Paper presented at the Educational Research Association, San Francisco. p. 27.

Mogey, N. and Watt, H. (2009). The use of computers in the assessment of student learning. Learning Technology Dissemination Initiative. Retrieved September 25, 2009. HTML by Phil Baker, pp. 1-11.

National Association of State Boards of Education. (2001). A primer on state accountability and large-scale assessments.

Ranku, G. (2001). An exploratory survey of teacher-designed tests used in Junior Secondary Schools in Gaborone. Unpublished thesis, p. 24.

Rosa, K. (2001). Item response theory applied to combinations of multiple-choice and constructed-response items: scale scores for patterns of summed scores. In Thissen, D. and Wainer, H. (Eds.), Test scoring (p. 253). Mahwah, N.J.: Lawrence Erlbaum Associates.

http://www.nasbe.org. (2009). National Association of State Boards of Education home page, Educationa_Issues/Reports/Assessment.pdf.

www.tcexam.com. (2009). Home page: CBT - Computer-Based Testing; CBA - Computer-Based Assessment.

Published

2024-02-26

How to Cite

Sello, Q. M., Totev, D. M., Kgosiemang, R., & Liu, Y. (2024). Computer Assessment for Secondary School Tests. COMPUSOFT: An International Journal of Advanced Computer Technology, 2(11), 350–359. Retrieved from https://ijact.in/index.php/j/article/view/61

Section

Original Research Article
