Assessing Writing 46 (2020) 100484
Available online 1 October 2020
1075-2935/© 2020 Elsevier Inc. All rights reserved.
Assessing writing for workplace purposes: Risks, conundrums
and compromises
Since the development of cuneiform script in Mesopotamia for keeping tallies of grain and sheep, written language has been used to
document workplace transactions as a safeguard against the unreliability of human memory or deceit. Modern workplaces continue
this reliance on written language as a means of mitigating the risk that some important aspect of work will be missed, misunderstood,
misrepresented or contested. In most workplaces, a great deal of effort is expended on the documentation of actors, timelines and
processes. The sale of livestock nowadays demands accreditation and form-filling which hold the vendor liable for any misinformation
provided. In the business world, an email serves as a record from provider to client that something has been completed successfully on
a certain date and payment is due. In the medical world, a form requests a particular pathology test and certifies the identity of the
sample collector and the specimen provider. Written exchanges such as these are entrenched and unremarkable in modern workplaces.
Thus, it is not surprising that, especially for those who are non-native users of the language(s) in question, the skills to manage written
communication in workplaces and professional domains are assessed via language tests.
Languages for Specific Purposes (LSP) tests serve accreditation and ‘work readiness’ purposes where the consequences of someone’s
inadequate written skills pose some level of risk for the receiving domain. In a hospital context, an inaccurate written handover poses a
risk to patient safety. In a business context, an inappropriate tone in an email poses a risk to the client relationship and their future
dealings. Writing tests for specific purposes mitigate risks by giving a degree of assurance that domain entrants are equipped for the
communication demands of the job. However, risk mitigation is a multi-layered and value-laden exercise (Giddens, 1999; Knoch &
Macqueen, 2020). It is therefore worth asking whose risk is being mitigated through the use of a test, and what exactly that risk is. If
a test does not adequately represent the nature and extent of occupational writing demands, the wrong candidates may be selected or
excluded with potentially grave consequences for employers and accreditation bodies (poor workplace performance on the one hand
and unjustified workforce shortages on the other). Similarly, for test takers, the risks of inadequate measurement work in two di-
rections: unfair exclusion or inability to meet workplace requirements. These risks are compounded by the time and financial cost of
preparing for and taking a test, particularly if these efforts do little to advance test takers’ language skills for the workplace. Finally,
quality control measures carried out by test development agencies are at least partly motivated by attempts to head off reputational
and commercial risks.
In a general sense, the test development and validation process is itself an exercise in mitigating stakeholder risks. The goals we set
and the methods we use to carry out test research extend the already complex entanglement of risks, mitigation efforts and responsibilities that surround tests and their uses. Questions arise such as: Who should we consult in developing instruments and
validating their uses? What theories and methods offer the best basis for needs analyses? What framework allows us to judge whether
or not a test is fit for purpose? This special issue presents an array of projects that address questions such as these, ranging from domain
description to setting appropriate cut scores.
In LSP testing, it is generally understood that the trustworthiness of the score use lies in the strength of the connection between the
test construct and the target language use domain (Douglas, 2005). However, establishing this connection may be more complex than
it seems. First, in the domain of use we cannot assume a single ‘target language’, a predictable and stable set of task demands or a
homogeneous group of users. Domain description is thus difficult and inevitably approximate. Second, tests themselves, and perhaps
especially writing tests, are powerful artifacts within ‘knowledge infrastructures’, defined by Edwards (2010) as ‘robust networks of
people, artifacts, and institutions that generate, share, and maintain specific knowledge about the human and natural worlds’ (p. 17).
These knowledge infrastructures encompass policy imperatives, institutional practices and various kinds of linguistic and cultural
https://doi.org/10.1016/j.asw.2020.100484