Timed Writing Assessment as a Measure of Writing Ability: A Qualitative Study
On the basis of these observations, one could justifiably conclude that timed writing of the sort used on standardized tests is not equivalent to first-draft writing in almost any other setting. The information collected in this study thus lends initial support to the statement of Luna, Solsken, and Kutz that “a standardized test represents a particular, situated literacy practice” with its own distinctive norms and conventions (282). As Dr. Rinard first suggested, however, if timed writing is understood as constituting its own genre, the genre of standardized assessment, then the claim that standardized essay tests provide a valid measurement of writing skill becomes suspect.
Indeed, Brian Huot criticizes traditional testing practices for relying on a “positivist epistemology” that treats writing ability as a fixed trait measurable independently of context (549-52). Articulating a new theory of writing assessment, Huot argues instead that any acceptable measurement of writing ability must be informed by a clear conception of the sociocultural environment and academic discipline in which it is applied (559-64). As he contends, a valid assessment instrument must be designed to generate a rhetorical situation consonant with the purposes for which the assessment results will be used (560). On this criterion, large-scale timed essay tests appear markedly deficient, precisely because they are standardized across a vast array of institutions and disciplines.
One might object to this line of reasoning, nevertheless, on the grounds that standardized essay examinations offer the greatest validity among all the forms of writing evaluation that many institutions have the resources to employ. Alternatively, one might acknowledge a complete disanalogy between timed writing and the type of writing required by regular papers, yet maintain, as Dr. Schwegman proposed, that writing at speed might constitute a valuable and significant form of writing in its own right. At this juncture, however, the analysis of the validity of timed writing assessment must confront the question of the social values that the choice of a particular assessment technique communicates to students and the larger educational community. For as White perceptively notes, “Every assessment defines its subject and establishes values” (37): each method of judging student achievement necessarily contributes to delineating the knowledge and capacities in which the subject of assessment consists. Furthermore, an assessment simultaneously conveys and reinforces a society’s normative commitment to a particular conception of what distinguishes greater and lesser ability in the relevant subject and of how proficiency in it can be gained or improved. Hence, in the words of John Eggleston, examinations may be considered “instruments of social control,” insofar as “the examination syllabus, and the student’s capacity to respond to it, becomes a major identification of what counts as knowledge” (22). Lest this claim seem excessively abstract as a basis for scrutinizing the legitimacy of timed writing assessment, David Boud enumerates some concrete effects of evaluation methods on the educational process. Research has shown, he reports, that students concentrate on the topics that are assessed rather than on other aspects of a course, that the types of tasks involved in the assessment influence their learning strategies, and that effective students watch carefully for instructors’ indications of what material will be tested (103-4).
Once alert to the symbolic power exerted by assessment mechanisms, one might be troubled by some of the values and ideals that timed essay examinations seem to propagate in the experience of the interviewees. Dr. Hsu remarked that the timed writing environment detracts from the significance of the “invention” process by which students discover and refine new ideas through revision. For Dr. Slatin, furthermore, timed essay tests entirely omit any emphasis on the value of creativity as an element of successful writing, replacing it with an unyielding focus on the “scientific” attitudes of analysis and criticism. As Dr. Schwegman commented, the essay examination format signals the importance of content knowledge at the expense of practicing skills, while Dr. Clayton observed that essay tests frame students’ writing as a response to a predetermined question rather than an avenue for exploring questions of their own devising. Finally, adopting the most critical stance of any of the instructors, Dr. Rinard explained that high-stakes timed writing examinations underline above all else the value of speed, in stark contrast to the ideal of thoughtful contemplation historically associated with effective writing.
In her opinion, timed writing assessments test performance rather than revealing a student’s potential, and they encourage a “reductive,” formulaic mode of writing that prevents the development of nuanced points of view in a composition. Except in Dr. Rinard’s case, these features of timed writing assessment were not necessarily considered negative; they were mentioned as factors supporting the capacity of an examination to fulfill its purpose of testing knowledge. Nevertheless, if essay tests are employed to measure writing ability in particular, as with the SAT, the fact that timed writing rewards qualities such as speed becomes problematic, since those qualities may be mistakenly assumed to define writing skill in general, outside of the testing context.
To illustrate the manner in which the construct of writing ability peculiar to timed writing assessment might begin to insinuate itself into broader conceptions of writing as a practice, one can turn to the theory of orders of simulacra, developed by the sociologist Jean Baudrillard and applied to the field of educational testing by F. Allan Hanson. This theory, as Hanson writes, describes three ways in which a signifier, such as the result of a test, can represent the object that is signified, such as the underlying skill or capability of which the test gives an indication (68). At the first order of simulacra, the signified is conceived as prior to the signifier, which reproduces or resembles it in some way (68), just as an archaeological artifact precedes the copy placed in a museum, which is judged valuable insofar as it faithfully imitates the original. At the second order, the signifier serves as the “functional equivalent” of the signified, with Hanson’s example being the robotic machinery that replaces human workers, the signified, in a factory (68). In the final stage of this progression, at the third order, the signifier is a formula or blueprint for the signified and holds priority over it, just as DNA encodes the attributes of an organism and guides its development (68). Although tests are often understood as simple measurements of preexisting characteristics in the subject, operating at the first order of simulacra, Hanson argues that they commonly act as second-order signifiers, as when a test score substitutes for an individual’s intelligence or ability in college admission decisions (68-71). Advancing to the level of third-order signifiers, tests can “literally construct human traits,” he asserts, by altering the course of a person’s educational experience and even by incentivizing students to cultivate the cognitive characteristics favored by standardized examinations (71-74).
Returning to the topic of timed writing specifically, one could contend that an essay test’s ascription of certain degrees of skill to examinees assumes the function of a second-order signifier as students and teachers begin to conceptualize writing ability in terms of the values that the test is perceived as communicating. A writing examination approaches the third order of simulacra when the widespread adoption of the system of values by which the test defines writing skill precipitates tangible changes in the modes of writing within a community. Indeed, evidence for this shift can be uncovered: Dr. Clayton related that students would occasionally seem to compose their regular papers in the style they were accustomed to using for examinations, with deleterious effects on the quality of those papers. As she stated,
I do find that I think students are having more problems with traditional writing assignments than in the past because they are relying more upon what they’ve been taught, and I’ve had to say to students, “Do not treat this paper assignment as though it were an exam.” So I find that ... if anything their exams are better, but their papers worse, because I think... they’re confusing the two things.
These effects are aggravated if timed writing examinations are meant to provide an exact indication of a student’s writing ability instead of merely ascertaining basic proficiency, especially considering that essay tests offer only a rough estimate of ability. In her book on the social history of educational assessment, Patricia Broadfoot observes that assessments fulfill the distinct functions of selecting candidates for excellence, on the one hand, and of certifying the possession of essential competencies on the other (26-33). Meanwhile, Eggleston discusses the social processes by which examinations contribute to determining the level of esteem granted to a given body of knowledge, and by which different disciplines compete for the validation of their own expertise as high-status (25-31). Synthesizing these concepts, one can understand how the role of a certain assessment in selecting for excellence rather than certifying basic competency might grant privileged status to the qualities and values that are publicly perceived as enabling success on that assessment. Such a role is in fact occupied by the SAT and AP examinations in the admissions systems of elite universities, greatly amplifying the capacity of timed writing assessment to influence the complex of social values attached to the concept of writing ability.
What this investigation has found, then, is that the timed essay examination, as an “assessment genre,” tests a particular species of writing ability distinguishable from the sort of skill demonstrated by the writing of longer papers and consequently disseminates a different set of values and a different understanding of writing as a practice. Especially when timed writing is employed for the specific purpose of revealing fine distinctions among individuals in the upper range of writing skill, the conception of writing ability constructed by timed writing assessment may even begin to supplant the social values undergirding traditional academic composition. Thus, in electing to use timed writing assessment as a measure of writing ability, instructors and administrators should take care to consider the potential consequences for the culture of writing among their students and to recognize that the representation of student abilities offered by such an assessment may not be fully generalizable to other contexts. Otherwise, the results of this study suggest, they may be inadvertently encouraging a reductive mode of writing and elevating the importance of speed at the expense of thoughtfulness and creativity.
Acknowledgements
This paper was written for a class taught by Prof. John Lee. I gratefully acknowledge his support and advice throughout the course of my research. I would also like to thank my interviewees, without whose amicable participation and insightful contributions this project could not have been completed: Dr. Brenda Rinard, Dr. Roland Hsu, Dr. Barbara Clayton, Dr. Patricia Slatin, and Dr. Jeffrey Schwegman.
References
Albertson, Kathy, and Mary Marwitz. “The Silent Scream: Students Negotiating Timed Writing Assessments.” Teaching English in the Two Year College 29 (2001): 144-53. 6 May 2012.
Boud, David. “Assessment and the Promotion of Academic Values.” Studies in Higher Education 15 (1990): 101-11. 6 May 2012.
Broadfoot, Patricia M. Education, Assessment, and Society. Buckingham, Eng.: Open University P, 1996.
Cho, Yeonsuk. “Assessing Writing: Are We Bound by Only One Method?” Assessing Writing 8 (2003): 165-91. 6 May 2012.
Clayton, Barbara. Personal interview. 2 May 2012.
Cooper, Peter L. “The Assessment of Writing Ability: A Review of Research.” Educational Testing Service Research Report 84-12. May 1984. 6 May 2012.
Del Principe, Ann, and Janine Graziano-King. “When Timing Isn’t Everything: Resisting the Use of Timed Tests to Assess Writing Ability.” Teaching English in the Two Year College 35 (2008): 297-311. 15 Apr. 2012.
Eggleston, John. “School Examinations--Some Sociological Issues.” Selection, Certification, and Control: Social Issues in Educational Assessment. Ed. Patricia Broadfoot. London: Falmer, 1984. 17-34.
“English Literature: The Exam.” 2012. College Board. 6 May 2012.
Freedman, Sarah Warshauer. Evaluating Writing: Linking Large-Scale Testing and Classroom Assessment. Berkeley, CA: National Center for the Study of Writing, 1991.
Hanson, F. Allan. “How Tests Create What They Are Intended to Measure.” Assessment: Social Practice and Social Product. Ed. Ann Filer. London: Routledge Falmer, 2000. 67-81.
Hass, Nancy. “The Writing Section? Relax.” New York Times 5 Nov. 2006. Proquest Historical Newspapers. 15 Apr. 2012.
Hsu, Roland. Personal interview. 1 May 2012.
Huot, Brian. “Toward a New Theory of Writing Assessment.” College Composition and Communication 47 (1996): 549-66. JSTOR. 15 Apr. 2012.
Isaacs, Emily, and Sean A. Molloy. “Texts of Our Institutional Lives: SATs for Writing Placement: A Critique and Counterproposal.” College English 72 (2010): 518-38. Proquest Research Library. 6 May 2012.
Kobrin, Jennifer L., et al. “Validity of the SAT for Predicting First-Year College Grade Point Average.” College Board Research Report 2008-5. 2008. 15 Apr. 2012.
Lederman, Marie Jean. “Why Test?” Writing Assessment: Issues and Strategies. Ed. Karen L. Greenberg, Harvey S. Wiener, and Richard A. Donovan. New York: Longman, 1986. 35-43.
Luna, Catherine, Judith Solsken, and Eleanor Kutz. “Defining Literacy: Lessons from High-Stakes Teacher Testing.” Journal of Teacher Education 51 (2000): 276-88. Sage Journals. 6 May 2012.
Moss, Pamela A. “Can There Be Validity Without Reliability?” Educational Researcher 23 (1994): 5-12. JSTOR. 6 May 2012.
Murphy, Sandra. “Some Consequences of Writing Assessment.” Balancing Dilemmas in Assessment and Learning in Contemporary Education. Ed. Anton Havnes and Liz McDowell. New York: Routledge, 2008. 33-49.
Rinard, Brenda. Telephone interview. 28 Apr. 2012.
Ruth, Leo, and Sandra Murphy. Designing Tasks for the Assessment of Writing. Norwood, NJ: Ablex, 1988.
“SAT Test Sections.” 2012. College Board. 6 May 2012.
Schwegman, Jeffrey. Personal interview. 4 May 2012.
Slatin, Patricia. Personal interview. 3 May 2012.
“The ACT Plus Writing.” 2012. ACT, Inc. 6 May 2012.
White, Edward M. “An Apologia for the Timed Impromptu Essay Test.” College Composition and Communication 46 (1995): 30-45. JSTOR. 15 Apr. 2012.
Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment.” College Composition and Communication 50 (1999): 483-503. JSTOR. 15 Apr. 2012.
Endnote
1. This was not one of the standard questions posed to all interviewees, but one that occurred as the conversations progressed. All the other instructors either were not asked for their opinion on this subject or did not oppose the position that timed writing assessment would yield only a somewhat crude method of determining skill levels.
Appendix
In the interviews conducted for this project, the course of the conversation and the phrasing of the questions varied in each instance, but all the instructors were asked a series of five basic questions modeled on the following.
- How accurately, in your experience, does timed writing assessment reflect students’ broader academic writing ability? Does the timed assessment environment emphasize certain aspects of writing skill at the expense of others?
- What effects does the presence of timed writing assessment in a course have on your own instructional techniques? Do you recognize any influences on student writing patterns from the prevalence of timed writing assessment throughout high school and college?
- What factors motivate you to employ timed writing assignments in place of, or in addition to, regular papers? To what extent do practical considerations such as plagiarism concerns or grading time affect the decision to use timed writing assessment?
- What social values and attitudes toward writing, and communication in general, are projected by the importance of timed writing assessment in education?
- Would you consider the assessment environment of timed writing to be more or less fair, or equitable, in comparison to the evaluation of regular papers, given that timed writing assessment ensures that exactly the same resources and amount of time are available to each student?