
How Are Impromptu or Timed Essays Used to Provide Information for Initial Placement

How are impromptu/timed essays used to provide information for initial placement?

The practice of using impromptu/timed essays for first-year college composition placement varies somewhat across two- and four-year institutions, but it generally continues to be popular. Even though major questions arose during the 1990s about the legitimacy of this kind of assessment/placement method (White, 30), institutions still find many reasons for using a single unrevised block of student writing, produced in a 20- to 60-minute sitting, as the basis for placement decisions. One reason is that impromptu essay testing is a direct measure of writing ability, which theoretically makes it more valid than indirect methods such as multiple-choice testing. In addition, essay evaluation methods can be made reliable (consistently scored and fair to the student) to some degree, though this is not an easy task, and the reliability is not always high.

Despite the rising interest in portfolio assessment as a more accurate way of determining writing ability, Edward M. White argued in 1995 that “the time has come for portfolio advocates… to recognize the important role essay testing has played in the past—and can still play—and to stop attacking essay testing as an unmitigated evil…” (31). Some might view such timed essays as an “abbreviated portfolio,” to use White’s words (32), one that provides enough information for entrance placement and is clearly preferable to multiple-choice testing.

Most impromptu testing involves giving a writing prompt to incoming students, who produce essays that are then scored in some way that allows placement decisions to be made. But institutions vary in how and when they use impromptu essay testing for placement. Below are just a handful of the practices documented by Haswell (“Post-secondary Entrance,” 5–6):

  • The placement essay is given in the third week of the composition course, after which “at-risk” writers are given extra instructor support in the course.
  • The placement essay is run as a “class” for entering students during summer orientation, allowing time for instructor feedback and discussion.
  • After the placement essay is used to place students, students are allowed to study the criteria the raters used, “turning a testing process into a learning one.”
  • The institution adds a second essay to create an “abridged portfolio” that is used for placement decisions.

Such variations allow institutions to maximize the validity and reliability of placement essays.

How are timed placement essays scored or rated?

Because of the inherent difficulty of using a single piece of unrevised writing to define a student as a basic writer or not (while keeping assessment costs down), the rating or scoring methods for placement essays also vary by institution. Essays might be evaluated by composition/BW instructors, by other English faculty, or by outside firms or agencies (“Post-secondary Entrance,” 2).

Placement teams generally seek evaluation methods that are consistent from grader to grader, for the sake of reliability. Holistic or “general impression” scoring allows graders to evaluate multiple attributes of writing rather than focusing on a few salient mechanical features. Scoring essays on a scale (typically 1 to 6 or 1 to 4) is efficient and allows an essay to be re-graded when readers initially disagree. The 60-minute placement essay required by the Skills Assessment Program of the City University of New York (CUNY) is one example of this practice, using six-point scoring by two “trained readers,” though some scholars doubt whether even well-known writing assessments are appropriate for placement (e.g., Bruna et al., 77).
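
To make the two-reader procedure concrete, here is a minimal sketch, in Python, of how scale-based scoring with re-grading on disagreement might be implemented. It is an illustration only: the more-than-one-point disagreement rule, the adjudication logic, and the placement cutoff are all assumptions made for the example, not CUNY’s documented procedure or any particular institution’s policy.

    # Minimal sketch of two-reader holistic placement scoring.
    # The disagreement rule and the cutoff below are illustrative
    # assumptions, not any institution's documented procedure.
    from typing import Optional

    PLACEMENT_CUTOFF = 8  # hypothetical: combined score below this -> basic writing

    def needs_third_reading(score1: int, score2: int) -> bool:
        """Readings more than one point apart on the 1-6 scale trigger a re-grade."""
        return abs(score1 - score2) > 1

    def place(score1: int, score2: int, score3: Optional[int] = None) -> str:
        """Return a placement decision from two (or, on disagreement, three) readings."""
        if needs_third_reading(score1, score2):
            if score3 is None:
                return "needs third reading"
            # Combine the adjudicator's score with the original reading closest to it.
            closest = min((score1, score2), key=lambda s: abs(s - score3))
            combined = closest + score3
        else:
            combined = score1 + score2
        return "first-year composition" if combined >= PLACEMENT_CUTOFF else "basic writing"

    print(place(4, 5))     # readers agree: 4 + 5 = 9 -> "first-year composition"
    print(place(2, 5))     # readers disagree -> "needs third reading"
    print(place(2, 5, 3))  # adjudicated: 2 + 3 = 5 -> "basic writing"

Pairing the adjudicator’s score with the closest original reading is only one possible resolution; some programs instead sum or average all three readings, or let the third reading stand alone.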

As a lower-cost alternative (of debated propriety), some institutions have used computer programs to score placement essays. Composition specialists generally reject such methods because the software is seen as focusing too heavily on certain mechanical features of writing, such as grammar and word choice, rather than on the critical issues of content and argument in student writing. The 2006 book Machine Scoring of Student Essays: Truth and Consequences (Ericsson & Haswell, eds.) collects research by a number of academics on the problems of computerized essay scoring. Through collections such as this one, composition scholars are now responding to claims made about programs such as e-rater, claims typically presented in publications written by the providers of the product. Even some of those writers concede that automated essay scoring does not assess all important qualities of writing and has other disadvantages (e.g., Attali & Burstein, 13, 24).

Debates will likely continue about essay scoring, and about how much the different methods overemphasize superficial writing standards and mechanics. Even holistic human scoring has been criticized as “forcing agreement to inappropriate standards by tyrannical methods” (White, 30).

Works Cited

Attali, Yigal, and Jill Burstein. “Automated Essay Scoring with E-Rater V.2.” Journal of Technology, Learning, and Assessment 4.3 (2006): <http://escholarship.bc.edu/jtla/vol4/3/>
Bruna, Liza, Ian Marshall, Tim McCormack, Leo Parascondola, Wendy Ryden, and Carl Whithaus. “Assessing Our Assessments: A Collective Questioning of What Students Need—and Get.” Journal of Basic Writing 17.1 (1998): 73–95.
“The City University of New York Skills Assessment Program.” <http://portal.cuny.edu/cms/id/cuny/documents/informationpage/002144.htm>
Haswell, Richard H. “Post-secondary Entrance Writing Placement.” CompPile (2005): <http://comppile.tamucc.edu/placement.htm>
Ericsson, Patricia Freitag, and Richard Haswell, eds. Machine Scoring of Student Essays: Truth and Consequences. Logan, UT: Utah State University Press, 2006.
White, Edward M. “An Apologia for the Timed Impromptu Essay Test.” College Composition and Communication 46.1 (1995): 30–45.