14.1.3
Discrete-point tests versus integrative tests
An example of this 'integrative' testing would be a task requiring students to listen to a simple telephone message and to draft a memo in reply. This involves the linguistic elements of phonology, grammar, vocabulary and discourse, as well as the skills of listening and writing. Similarly, a task requiring learners to read an e-mail in the FL and relay its contents orally to another person on the telephone involves an equally wide range of linguistic elements, as well as the skills of reading and speaking. To refer back to our earlier distinction, discrete-point tests are normally indirect, while integrative tests are usually direct.

Activity 3
One interesting variation on integrative tests was Oller's (1979) development of cloze tests (see section 14.5.1.1) which, along with C-tests (see section 14.5.1.2), represent exceptions to the direct-integrative and indirect-discrete alignments, being essentially indirect tests that integrate a range of language skills and linguistic elements. In accordance with the so-called 'Unitary Competence Hypothesis' (cf. section 14.5.1), Oller believed that 'performance on a whole range of tests […] depended on the same underlying capacity in the learner - the ability to integrate grammatical, lexical, contextual and pragmatic knowledge in test performance' (Oller, 1979: 15). The cloze test was thought to be capable of measuring this competence. In fact, however, although they are still quite widely used, cloze tests have been found to measure much the same things as discrete tests of grammar and vocabulary.

Proponents of objective tests would argue that integrative tests can lead to unreliable marking, since they frequently require a degree of subjective judgement on the part of the marker. However, detailed and thoroughly trialled criteria (see, for example, Appendices 1-3), allied to the standardizing procedures for teams of markers employed by organizations involved in large-scale testing, such as the secondary awarding bodies or The Open University, ensure a good degree of transparency and high reliability in direct tests of speaking and writing. Student numbers in conventional university language assessment may not warrant such detailed procedures, but the same tried and tested criteria, thorough and agreed mark schemes, and moderation of marks and/or second marking should ensure similar reliability. Ultimately, the choice between direct and indirect testing will be a pedagogical one.
As noted in Module 13, section 13.1.3, all assessment should be closely related to learning outcomes and should have the effect of a positive washback on classroom learning and teaching (see Module 13, section 13.1.4.5). As Bailey concludes (1998: 83):
As this suggests, the apparent direct-indirect and integrative-discrete dichotomies are no such thing. The terms should rather be seen as denoting points on a spectrum: depending on its purpose, a language test will be located closer to one end or the other, while a single examination might well feature test elements from various points on the spectrum.

Activity 4