
How effective is timed-essay placement for meeting student needs?

“Where writing-placement systems work well, they protect the academic level of the course, support retention into the second year, and maintain and enrich faculty conversation about writing instruction. But they do not always work well.” (Haswell, “Post-secondary Entrance,” 2)

“Validity means honesty; the assessment is demonstrably measuring what it claims to measure. Reliability means consistency; the scores are demonstrably fair.” (White, 40)

Despite its prevalence and long use, impromptu-essay placement continues to be debated in terms of validity, rater agreement and bias, test-design reliability, and social, political, and cultural fairness. In his 2004 synopsis of placement practices (“Post-secondary Entry”), Haswell adds three more issues crucial for placement research: “writer reliability,” or the degree to which a single piece of writing reflects a writer’s abilities; indirect measurement (such as multiple-choice tests), one answer to the high costs and low scorer reliability of direct assessment; and predictability, the assessment’s correlation with students’ later success in the (basic) writing courses where they are placed. Unfortunately, studies offer only limited clarity about how well placement testing serves students’ needs, though they do document the many ways in which it can be ineffective.

What are some pitfalls of using timed essay assessment in assigning “basic writer” status?

Composition scholars have mapped out a minefield of potential problems institutions face when using timed essays for placement:

  • Students of different kinds and backgrounds may be poorly served by an assessment method; as Hilgers puts it, “bad assessment is what gets most students labeled as ‘basic writers’” (69). Even carefully considered holistic scoring and portfolio assessment can fail non-traditional students when the assessment itself, rather than the student, is at fault (e.g., Harley and Cannon, 72).
  • ESL students face particular problems when institutions test their writing abilities by typical methods, which may not truly reflect their abilities at composing texts. Deborah Crusan notes that timed-essay testing can marginalize ESL students because many “have great difficulty producing fluent written [English] discourse, especially in timed situations” (21).
  • One-shot timed essays tend to obscure a student’s ability to revise his or her writing, an ability critical for all good writers. For example, Ann Del Principe and Janine Graziano-King find that allowing students to revise their essays in a controlled environment may present a more authentic portrait of writing ability for many students, especially in terms of focus and elaboration.
  • In rating essay tests, raters may devalue certain language as “bad writing.” Richard Haswell (“Dark Shadows”) notes that “incompetent” essays often resemble “working-world” writing, with features such as concrete language and compressed syntax that would be favored in the workplace though perhaps not in the assessment (304, 311).

Finally, assessment practice tends only to compare essays against one another rather than to assess each student’s performance. “Diagnosis, good placement practice, looks through a placement essay in order to predict the student’s future performance, whereas pseudodiagnosis, poor placement practice, pretends to do that while actually only ranking the essay in comparison with other essays” (Haswell, “Post-secondary Entry”).

How do alternative assessment methods for first-year writing placement compare?

Composition scholars generally agree that direct methods of assessing writing are better than indirect measures (such as multiple-choice tests), and impromptu essays at least meet this minimum requirement. However, some institutions continue to use indirect methods as their primary means of placement. These inexpensive, computer-mediated tests are meant to provide some picture of a student’s writing ability. One example is the ASSET Writing Skills test from ACT, Inc., a 36-item test that provides students with three text samples and asks them to make choices about sentence structure, grammar, style, and organization. The multiple-choice writing section of the Texas Higher Education Assessment (THEA) similarly consists of 16 passages and asks students to pick appropriate changes to each passage. But such tests give students no opportunity to be evaluated on their own production of texts.

Portfolio assessment of incoming students’ high-school writing has been used as a means of first-year placement. It answers the need for direct writing evaluation while allowing a student to be judged by his or her revised work across a variety of genres and writing purposes. In the 1990s, for example, the University of Michigan (Willard-Traub et al.) attempted to replace its 50-minute placement essay with portfolio assessment. Data showed that portfolio assessment placed about the same percentage of students into the (basic) writing practicum course as timed-essay placement had (roughly 12%) (52). Moreover, composition instructors were satisfied that about 80% of their students had been placed properly by portfolio assessment (though no data for the timed-essay period existed for comparison) (56). However, portfolio evaluation proved more costly; after a few years of development, Michigan reverted to the lower-cost placement practice (82).

Costs aside, an ideal placement process would likely combine multiple measures to get the best picture of a student’s writing ability and language practices; that is, educators “should implement multiple measures and validate with multiple measures” (Haswell, “Post-secondary Entry”). Moreover, an understanding of the context of the student’s writing situation is key to making valid placement decisions. In terms of essay testing, such “validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests” (American Educational Research Association et al., quoted in Elliot, 268).

What are some current concerns about assessment for placement?

Professionals in the composition community and public- and private-sector groups (such as ETS) have taken positions on whether impromptu essay tests are legitimate means of evaluating writing ability for placement or other purposes. Despite sometimes very public disagreements about the use of standardized testing (e.g., Elliot, 310), composition educators and non-teachers share a basic opposition to timed, unrevised essay tests and to multiple-choice/indirect tests, as a comparison of the November 2006 CCCC position statement on assessment with the business-influenced National Commission on Writing’s “Writing and School Reform” (containing the 2002 “Neglected ‘R’” report) shows. Both documents also emphasize the importance of human readers evaluating student writing, though the NCW document is open to the “exploration” of improved technologies for software that measures writing competence (69).

Groups of scholars are also concerned with how writing assessment should be conducted in the new century. Along with the documents above, the Council of Writing Program Administrators’ November 2007 draft position statement on assessment emphasizes that assessment must account for the multiple contexts writers face when addressing the varied needs of their 21st-century audiences. The council implies that “pen-and-paper writing” can no longer be an adequate basis for assessing a student’s abilities.

Within their own field, writing program administrators (WPAs) and instructors face the sometimes extreme positions prominent theorists take on assessment. One example is Ira Shor’s labeling of placement testing at CUNY as “bogus” (102) because it stigmatizes non-traditional students as “basic writers” who may in fact do quite well in mainstream courses. Other scholars question whether student writing should (or can) be judged on how well it fits future employers’ notions of “good” writing, or any other single standard (Bruna et al. 74, 76).

More recently, scholars such as Brian Huot and Patricia Lynne have argued that the context-specific nature of writing presents extreme difficulties for anyone who attempts to assess a piece of writing. Huot also sees writing assessment “as social action,” shaping instruction itself, and “possibly enabling certain students while limiting others. It also helps to make clear that often assessments imposed from the outside have specific political agendas that are designed to profit certain groups” ((Re)articulating Writing Assessment, 175). This makes many standard assessment practices suspect in that they potentially shortchange students of various abilities and cultural backgrounds.

Accordingly, traditional assessment methods may not be able to claim reliability and validity. “Writing assessment procedures, as they have been traditionally constructed, are designed to produce reliable (that is, consistent) numerical scores of individual student papers from independent judges,” Huot states (“Toward a New Theory,” 549). By contrast, he and other theorists deny that good writing has “fixed,” acontextual traits; thus, “the validity of a test must include a recognizable and supportable theoretical foundation as well as empirical data from the students’ work” (550). Huot therefore argues that it is dangerous to equate reliability in testing (or statistical consistency in scoring) with “fairness” for students, since this reliability depends on the nature of the standard judgments being made in scoring (557). Rather, “Assessment practices need to be based upon the notion that we are attempting to assess a writer’s ability to communicate within a particular context and to a specific audience who needs to read this writing as part of a clearly defined communicative event” (559).

Patricia Lynne’s 2004 book calls for a “new theory” of assessment that would address issues of context and the nature of writing better than traditional “objectivist” assessment based on measurement. “Educational measurement principles, most often signified by these terms ‘validity’ and ‘reliability,’… [ignore] the complexities of written literacy valued by compositionists, including the influence and importance of communicative context, the collaborative and conversational aspects of the process of writing, and the social construction of meaning” (3). As a step in this direction, she urges compositionists to change their language about assessment, replacing the terms “validity” and “reliability” (which are built on the idea that objective assessment of writing can legitimately be carried out) with “meaningfulness” and “ethics,” respectively (117).

Such reevaluations of writing assessment would tend to weaken arguments that favor the traditional use of impromptu essays for placement, while perhaps favoring other methods such as directed self-placement. Ultimately, whenever timed-essay testing for placement is used, alternative approaches to assessment should always be considered to balance practical concerns, such as cost-effectiveness, with the basic fairness of directing a student to the course level that will best meet his or her needs.

WORKS CITED

“ASSET Writing Skills Test.” <http://www.act.org/asset/tests/writing.html>
Attali, Yigal, and Jill Burstein. “Automated Essay Scoring with E-Rater V.2.” Journal of Technology, Learning, and Assessment 4.3 (2006): <http://escholarship.bc.edu/jtla/vol4/3/>
Bruna, Liza, Ian Marshall, Tim McCormack, Leo Parascondola, Wendy Ryden, and Carl Whithaus. “Assessing Our Assessments: A Collective Questioning of What Students Need—and Get.” Journal of Basic Writing 17.1 (1998): 73–95.
CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” (November 2006): <http://www.ncte.org/cccc/announcements/123784.htm>
“The City University of New York Skills Assessment Program.” <http://portal.cuny.edu/cms/id/cuny/documents/informationpage/002144.htm>
Council of Writing Program Administrators. “WPA Position Statement on Assessment – DRAFT.” (November 2007): <http://www.wpacouncil.org/AssessmentPosition>
Crusan, Deborah. “An Assessment of ESL Writing Placement Assessment.” Assessing Writing 8.1 (2002): 17–30.
Del Principe, Ann, and Janine Graziano-King. “When Timing Isn’t Everything: Resisting the Use of Timed Tests to Assess Writing Ability.” TETYC 35.5 (2008): 297–311.
Elliot, Norbert. On A Scale: A Social History of Writing Assessment. New York: Peter Lang, 2005.
Harley, Kay, and Sally I. Cannon. “Failure: The Student’s or the Assessment’s?” Journal of Basic Writing 15.1 (1996): 70–87.
Haswell, Richard H. “Dark Shadows: The Fate of Writers at the Bottom.” College Composition and Communication 39.3 (1988): 303–14.
Haswell, Richard H. “Post-secondary Entry Writing Placement: A Brief Synopsis of Research.” CompPile (2004): <http://comppile.tamucc.edu/writingplacementresearch.htm>
Haswell, Richard H. “Post-secondary Entrance Writing Placement.” CompPile (2005): <http://comppile.tamucc.edu/placement.htm>
Huot, Brian A. “Toward a New Theory of Writing Assessment.” College Composition and Communication 47.4 (1996): 549–566.
Huot, Brian A. (Re)articulating Writing Assessment for Teaching and Learning. Logan, Utah: Utah State University Press, 2002.
Hilgers, Thomas. “Basic Writing Curricula and Good Assessment Practices.” Journal of Basic Writing 14.2 (1995): 68–74.
Lynne, Patricia. Coming to Terms: A Theory of Writing Assessment. Logan, Utah: Utah State University Press, 2004.
National Commission on Writing. “Writing and School Reform.” (May 2006): <http://www.writingcommission.org/prod_downloads/writingcom/writing-school-reform-natl-comm-writing.pdf>
Shor, Ira. “Illegal Literacy.” Journal of Basic Writing 19.1 (2000): 100–12.
“THEA Practice Test Writing Section.” <http://www.thea.nesinc.com/practice.htm>
White, Edward M. “An Apologia for the Timed Impromptu Essay Test.” College Composition and Communication 46.1 (1995): 30–45.
Willard-Traub, Margaret, Emily Decker, Rebecca Reed, and Jerome Johnston. “The Development of Large-Scale Portfolio Placement Assessment at the University of Michigan: 1992–1998.” Assessing Writing 6.1 (1999): 41–84.