13.1.6.2
Criterion-referencing
Criterion-referenced assessment, on the other hand, involves awarding marks in relation to a predefined objective or criterion, independently of how the other candidates have performed. The example most commonly quoted is the driving test, where everyone must reach a minimum threshold of competence before being allowed to drive unaccompanied. This threshold remains the same regardless of how many candidates pass or fail on a given day or in a given year: performance is measured purely against clearly specified criteria, not against the performance of other novice drivers. In language testing, an example of a criterion for a basic pass mark in an oral exam might be:
Examples of assessment criteria for two different types of language task can be found in Appendix 2, and more composite criteria grids for speaking and writing from the Open University's language programmes are listed in Appendix 3.

From the student perspective, one of the key differences between norm- and criterion-referencing is that in the former it is impossible to know what mark one needs to achieve in order to obtain a particular grade; in the latter, by contrast, not only is this information made available, but students are also given a description of the attributes the marker is looking for at each grade.

From the tutor's point of view, a major difference is that in a norm-referenced test, in which an 'A' is awarded to the top 10% only, an exam set at too high or too low a standard is less of a problem, since grades are allocated purely on a relative basis: if the top 10% of marks one year are all over 80, whereas no one scores over 75 the next, this is not especially important. In a criterion-referenced system, more care has to be taken both in setting the paper and in allocating marks strictly in accordance with the agreed criteria. There is less room for exams of variable quality in a true criterion-referenced system, which is why awarding bodies have to invest so much time in producing successive drafts of each paper as they strive to ensure that public exams, such as GCSEs, are pitched at the right level each year. (For further discussion of the subject, see Knight, 2001: 16-19.)