KNOWLEDGE ASSESSMENT AND APPLICATION OF COMPUTER ADAPTIVE TESTING

Zoran Petar Čekerevac, Svetlana Andjelić, Petar Zoran Čekerevac

Abstract


The assessment of the knowledge and skills of students, and of people in general, is everyday practice in today's world. It is carried out in different ways, and many methods are used for the purpose; no single method is best in all circumstances. The first part of this paper deals with the problems that arise in evaluation, and the second part analyzes the possibility of using computer adaptive testing to grade students' knowledge. Special attention is paid to the objectivity of evaluation and to the influence of assessors on the final mark. Examples are examined of the grading of students' papers presented and defended at an international student competition, as well as of the defense of several students' term papers within regular university classes. The results were compared, and many aspects of establishing the objectivity of the obtained scores are highlighted. The section dealing with the application of computer adaptive testing in evaluation presents the testing algorithm, the research methods, and the results of research on concrete examples from practice. It also compares the scores that the same students achieved under two testing modes: classical testing and computerized adaptive testing. In addition, the paper presents the results of a survey of the tested students' impressions; the questions focused on the appropriateness of computer adaptive testing, other impressions the students gained during testing, satisfaction with the earned mark, and so on. The results achieved by applying computer adaptive testing are summarized in the conclusions, where some advantages and disadvantages of such testing are also discussed. Finally, the paper answers the question of the usefulness of evaluation, as well as some questions about the objectivity of assessment.
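The adaptive testing loop summarized above can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' exact algorithm: it assumes a Rasch-style (one-parameter logistic) response model, a discretized ability grid, and a uniform prior, and it updates a MAP (maximum a posteriori) ability estimate via Bayes' theorem after each answered item, always administering the unused item whose difficulty is closest to the current estimate.

```python
import math

def p_correct(theta, difficulty):
    """Rasch-style probability of a correct answer at ability theta."""
    return 1.0 / (1.0 + math.exp(difficulty - theta))

def run_cat(item_bank, answers, thetas=None):
    """item_bank: list of item difficulties; answers: callable item_index -> bool."""
    if thetas is None:
        thetas = [t / 2.0 for t in range(-6, 7)]   # ability grid -3.0 .. 3.0
    posterior = [1.0 / len(thetas)] * len(thetas)  # uniform prior over the grid
    unused = list(range(len(item_bank)))
    theta_hat = 0.0
    while unused:
        # adaptive step: pick the unused item closest to the current estimate
        item = min(unused, key=lambda i: abs(item_bank[i] - theta_hat))
        unused.remove(item)
        correct = answers(item)
        # Bayes update of the posterior over the ability grid
        for k, t in enumerate(thetas):
            p = p_correct(t, item_bank[item])
            posterior[k] *= p if correct else (1.0 - p)
        s = sum(posterior)
        posterior = [w / s for w in posterior]
        # MAP estimate: grid point carrying the highest posterior mass
        theta_hat = thetas[max(range(len(thetas)), key=lambda k: posterior[k])]
    return theta_hat
```

For example, a simulated student who answers correctly exactly those items easier than difficulty 1.0 ends up with an estimate between 0 and 1, between the hardest item passed and the easiest item failed.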

Keywords


Knowledge assessment, CAT, computer adaptive testing, estimation of knowledge, Bayes' theorem, MAP approach, Spearman rank-order correlation coefficient, grading system
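The Spearman rank-order correlation coefficient named in the keywords is the statistic used to compare the marks the same students earned under the two testing modes. A minimal self-contained computation (the score lists in the example are invented sample data, not the paper's results) ranks each score list, averaging the ranks of ties, and takes the Pearson correlation of the rank vectors:

```python
def ranks(scores):
    """Rank scores from 1 (lowest), averaging the ranks of tied values."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1                          # extend over the tie group
        avg = (i + j) / 2.0 + 1.0           # average rank for the tie group
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Identical rankings under the two modes give rho = 1, fully reversed rankings give rho = -1, and any monotone rescaling of one mode's scores leaves rho unchanged, which is why the rank-order coefficient suits the grade-comparison task.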


