14.1.3
Discrete-point tests versus integrative tests

An example of this 'integrative' testing would be a task requiring students to listen to a simple telephone message and to draft a memo in reply. This involves the linguistic elements of phonology, grammar, vocabulary and discourse, as well as the skills of listening and writing. Similarly, a task requiring learners to read an e-mail in the FL and relay its contents orally to another person on the telephone involves an equally wide range of linguistic elements, as well as the skills of reading and speaking. To refer back to our earlier distinction, discrete-point tests are normally indirect, while integrative tests are usually direct.

Activity 3

  1. See if you can think of two further language tasks that involve integration of skills.

  2. Which linguistic elements are involved in your tasks?

One interesting variation on integrative tests was Oller's (1979) development of cloze tests (see section 14.5.1.1) which, along with C-tests (see section 14.5.1.2), represent exceptions to the direct-integrative and indirect-discrete alignment, being essentially indirect tests that integrate a range of language skills and linguistic elements. In accordance with the so-called 'Unitary Competence Hypothesis' (cf. section 14.5.1), Oller believed that 'performance on a whole range of tests […] depended on the same underlying capacity in the learner - the ability to integrate grammatical, lexical, contextual and pragmatic knowledge in test performance' (Oller, 1979: 15). The cloze test was thought to be capable of measuring this competence. Although cloze tests are still quite widely used, it has in fact been found that they tend to measure much the same things as discrete-point tests of grammar and vocabulary.

Proponents of objective tests would argue that integrative tests can lead to unreliable marking, since they frequently require a degree of subjective judgement on the part of the marker. However, detailed and thoroughly trialled criteria (see, for example, Appendices 1-3), allied to the standardizing procedures for teams of markers employed by organizations involved in large-scale testing, such as the secondary awarding bodies or The Open University, ensure a good degree of transparency and high reliability on direct tests of speaking and writing. Student numbers in conventional university language assessment may not warrant such detailed procedures, but use of the same tried and tested criteria, together with thorough and agreed mark schemes, moderation of marks and/or second marking, should ensure similar reliability.

Ultimately, the choice of direct versus indirect testing will be a pedagogical one. As noted in Module 13, section 13.1.3, all assessment should be closely related to learning outcomes and should have a positive washback effect on classroom learning and teaching (see Module 13, section 13.1.4.5). As Bailey (1998: 83) concludes:

The crux of the matter for us as classroom teachers choosing between direct and indirect measures of a skill is to decide what we want to measure. Do we wish to focus very tightly on specific aspects of enabling skills? (This might be the case in a diagnostic test or a progress test.) If so, an indirect test might be useful. Or do we wish to assess our learners' abilities to use the macroskills in the target language? In that case, a more direct test would be called for.

As this suggests, the apparent direct-indirect and integrative-discrete dichotomies are no such thing. The terms are better seen as denoting points on a spectrum: depending on its purpose, a language test will sit closer to one end or the other, while a single examination might well feature test elements from various points along the spectrum.

Activity 4

  1. Choose TWO of the four skills of listening, speaking, reading and writing, and for each of them think about how you would assess students' ability through one direct and one indirect test.

  2. Consider also how you would mark each of the resulting four tests.
