The Effect of Task Characteristics on Mastery/Non-mastery Decisions

Author(s): Wojciech Malec
Subject(s): Language and Literature Studies
Published by: Towarzystwo Naukowe KUL & Katolicki Uniwersytet Lubelski Jana Pawła II
Keywords: criterion-referenced measurement; method effect; task characteristics (test method facets); item format; classification decisions; collocation testing

Summary/Abstract: Whatever their particular purposes, language tests are, broadly speaking, constructed and administered with a view to assessing (an aspect of) language ability. Inferences about the ability being measured, as well as classification decisions, are made on the basis of test takers’ scores. However, test score variance is never solely and directly due to variations in language ability. A variety of confounding factors, both external and internal, can affect test performance. An important source of variance that is not associated with language ability is the method of testing. Test method is a general term referring to the testing procedure as a whole, and as such it can be viewed and examined in its entirety. Within the framework of test method facets, or task characteristics, however, various aspects of the testing procedure can be delineated and analyzed separately: a researcher can focus primarily on a single test method facet, such as the format of the test items and the way it impacts on the difficulty of the test. It is important to note that test method effects can be of two main kinds. They can be manifested either in different rank orders of the test takers (which means that the methods do not measure the same construct) or in different mastery/non-mastery classifications (which means that the methods impact on test difficulty). It might be claimed that the two kinds of effect are not equally relevant to norm-referenced testing (NRT) and criterion-referenced testing (CRT). In the NRT approach, it is only an individual’s relative standing with respect to the other test takers that matters: as long as there is a significant correlation between the scores obtained from the different test methods, the difference between them can be regarded as negligible.
From the perspective of CRT, by contrast, while correlations are also important for construct validation, the difficulty associated with a given test method is of paramount importance. Unless we recognize this kind of effect and learn to understand it, we will not be able to correctly interpret CR test scores. In other words, we will not be able to ascertain whether students fail to meet the criterion for mastery because they have not mastered the content domain of the test or because the cognitive demand of the test tasks is very high. Gaining a better understanding of how item format can affect the difficulty of a criterion-referenced progress test was the underlying rationale for the empirical study reported in the sections to follow. The measurement instruments were intended to assess knowledge of collocations, a test construct that is conspicuously under-researched in the measurement literature. For this reason, one of the purposes of the study was the development of effective measures of collocational knowledge, mainly through quantitative item analysis and through an examination of the tests’ validity and reliability.

  • Issue Year: 56/2008
  • Issue No: 05
  • Page Range: 93-113
  • Page Count: 21
  • Language: English