
How Should the Effectiveness of Basic Writing Courses or Programs be Assessed?

by Alison Bennett

The need for assessment design

Assessment measures are a vital part of course design. Most obviously, program administrators will need feedback from assessment to make informed decisions about the curriculum. Just as importantly, assessment measures can provide valuable information for institutional policy makers when funding issues need resolution. When writing program designers fail to build in assessment methodologies, they leave a gap for college administrators or legislators to fill (Woolcott). As different assessment measures can render different kinds of information, it’s important to carefully consider the relationship between desired outcomes and how they will be measured.

Program goals

Defining measurable goals is essential to meaningful program assessment (White, Gleason, and Yancey). In some instances, goals for basic writing are mandated by external forces, such as state or federal governing bodies. However, the most meaningful outcomes are those endorsed by the majority of stakeholders in basic writing courses (Gleason, Slevin). Stakeholders include everyone with an interest in the basic writing curriculum: students, faculty, administrators, employers, and others. The goals of a basic writing program, and how they are set, are rooted in the philosophy upon which the program was designed. Academic acceptance of a basic writing program therefore depends to a great extent upon how well stakeholders understand what the program wants to accomplish and why. The demonstrated power of stakeholders suggests the importance of exploring expedient ways to include them in outcomes design.

Positions on Writing Goals from some Professional Organizations

Writing Assessment: A Position Statement from the Conference on College Composition and Communication

Outcomes Statement for First-Year Composition from the Council of Writing Program Administrators

Assessment Strategies

Assessment methodologies are shaped by time frame, available resources, and purpose (Yancey). Because of the diverse needs of various stakeholders in basic writing programs, multiple assessment methods are recommended. Formative measures focus on the process of the basic writing program; they are used to make ongoing decisions about how pedagogy and curriculum contribute to the goals of the program. Summative measures are product oriented and are generally used to evaluate the program's quality for external audiences.

Formative Assessments

Formative evaluations determine how well the factors that drive the program, such as instructional practices, assignments, and the staff's conception of the program's mission, support its stated goals.

Some examples of formative assessments would include:

  • in-service staff instructional and information sharing sessions
  • consultations with external authorities
  • tracking students’ progress as they write for other classes
  • student interviews

More than merely rating the quality of student work, measures such as these provide information necessary for making informed decisions about teaching practices and curriculum. Therefore, the stakeholders who find the most value in formative evaluation are those most directly involved with the program's day-to-day operation, such as faculty and program administrators. Some kinds of formative assessment are conducted concurrently with the course of study; others happen at fixed points, such as midterm. Normally, specific information about what students learn and how they demonstrate that knowledge falls beyond the scope of formative evaluation and into the sphere of summative assessment. Yet, as Gleason notes, formative evaluations can exert unanticipated and significant influences upon summative methods.

Summative Assessments

Summative measures focus upon the products of a writing program as a way to measure its effectiveness. They are designed to find out what students have learned and the degree of proficiency with which they can apply it.

Examples of summative assessment tools are:

  • timed essays
  • standardized tests
  • portfolio evaluations
  • individual writing assignments

While summative assessment is better suited to judging achievement at the student level, it is possible to draw conclusions about program design from summative measures when assessments concentrate on how students' products demonstrate that the goals of the program were met or missed. When given at designated intervals, summative assessments like those mentioned above can be useful for tracking students' progress over the duration of the course.

Are multi-modal assessments necessary?

The expectations that various stakeholders hold for basic writing instruction make a case for formulating a variety of assessment methods (Yancey, Gleason, Goto). From a pedagogical viewpoint, assessments that yield qualitative information are frequently the most useful: teachers and department heads need feedback that will help them adjust instruction and content in ways that serve the purposes of the course. Other stakeholders, such as policy makers, legislators, and future employers, need quantitative information that can help them evaluate the program's efficacy and make decisions that could affect its future; funding decisions are a good example (Goto). Assessment approaches that can show the program's worth in relation to the continuing educational experiences of basic writing students are most useful for addressing the questions of stakeholders who set institutional policy.

Some possible areas for measurement are:

  • institutional retention of basic writing students as compared to non-basic writing students
  • academic progress of basic writing students as measured by classification and GPA
  • highest level of the composition course sequence completed by basic writing students compared to non-basic writing students

Such assessments are generally most cost-efficient and expedient when the designer uses resources already available (Yancey). Student records and transcripts can provide the academic data needed. In many situations, however, the scholastic success of basic writers is influenced by factors beyond the academy, such as the necessity of full-time employment. It may therefore be advisable to incorporate socioeconomic data when available.
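For programs with access to enrollment records, a comparison like the retention measure above reduces to simple proportions. The sketch below shows one way it might be computed; the record fields and sample data are hypothetical, not drawn from any real student information system.

```python
# Sketch: comparing one-year retention of basic writing (BW) students
# with non-BW students. Field names and data are illustrative only.

records = [
    {"id": 1, "basic_writing": True,  "retained_next_year": True},
    {"id": 2, "basic_writing": True,  "retained_next_year": False},
    {"id": 3, "basic_writing": False, "retained_next_year": True},
    {"id": 4, "basic_writing": False, "retained_next_year": True},
]

def retention_rate(students):
    """Fraction of a group that re-enrolled the following year."""
    if not students:
        return 0.0
    return sum(s["retained_next_year"] for s in students) / len(students)

bw = [s for s in records if s["basic_writing"]]
non_bw = [s for s in records if not s["basic_writing"]]

print(f"BW retention:     {retention_rate(bw):.0%}")
print(f"Non-BW retention: {retention_rate(non_bw):.0%}")
```

In practice the groups would come from institutional records rather than a hand-built list, and the same pattern extends to GPA or course-sequence comparisons.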

Further discussion about assessment:

Online bibliography of assessment publications

Yancey, Kathleen Blake. “Outcomes Assessment and Basic Writing: What, Why, and How?” Basic Writing e-Journal 1.1 (1999).

Baker, Tracy, and Peggy Jolly. “The ‘Hard Evidence’: Documenting the Effectiveness of a Basic Writing Program.” Journal of Basic Writing 18.1 (1999): 27–39.

Gilyard, Keith. “Basic Writing, Cost Effectiveness and Ideology.” Journal of Basic Writing 19.1 (2000): 36–42.

Gleason, Barbara. “Evaluating Writing Programs in Real Time: The Politics of Remediation.” College Composition and Communication 51.4 (2000): 560–88.

Goto, Stanford T. “Basic Writing and Policy Reform: Why We Keep Talking Past Each Other.” Journal of Basic Writing 21.2 (2001): 1–20.

Haswell, Richard H. “Dark Shadows: The Fate of Writers at the Bottom.” College Composition and Communication 39.3 (1988): 303–15.

Slevin, James F. “Engaging Intellectual Work: The Faculty’s Role in Assessment.” College English 63.3 (2001): 288–305.

White, Edward M. “The Scoring of Writing Portfolios: Phase 2.” College Composition and Communication 56.4 (2005): 581–99.

Wiener, Harvey S. “The Attack on Basic Writing - and After.” Journal of Basic Writing 17.1 (1998): 96–103.

Woolcott, Wilma. “Evaluating a Basic Writing Program.” Journal of Basic Writing 15.1 (1996): 57–69.

Page last modified on April 26, 2006, at 11:08 PM