Reeves (1995) has developed a wide range of evaluation tools, including an anecdotal record form that collects the "human story," an expert review checklist for gathering feedback on particular aspects of a program, a focus group protocol, an implementation log for collecting information on the actual versus planned use of program features, and a user-interface rating form for assessing the design of an interactive instructional product's interface with the student. His "evaluation matrix" is an integrating tool that helps evaluators assess the advantages of each individual tool listed above for answering particular evaluation questions.

Evaluation Reports

Just as evaluators make choices about the process of evaluation, they also need to make decisions about the final product of the evaluation activity: the evaluation report. The Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994) suggests criteria relating to several aspects of evaluation reporting:

Report clarity. Evaluation reports should clearly describe the program being evaluated, including context, purposes, procedures, and findings, so that essential information is provided and easily understood.

Report timeliness and dissemination. Significant interim findings should be disseminated to allow timely use.

Disclosure of findings. The full set of evaluation findings, along with a description of pertinent limitations, should be made accessible to the persons affected by the evaluation and those with expressed legal rights to receive the results.

Impartial reporting. Reporting procedures should fairly reflect the findings, avoiding distortion caused by personal feelings or biases.

The American Evaluation Association's (1995) "Guiding Principles for Evaluators" further notes that, although "evaluations often will negatively affect the interests of some stakeholders, evaluators should . . . communicate [their] results in a way that clearly respects stakeholders' dignity and self-worth" (p. 7).

Reeves (1995) notes that the long reports written by many evaluators are seldom read. To increase the likelihood that an evaluation report will ultimately have an impact on institutional decision-making, he suggests formatting the report in easy-to-consume "chunks" of information in four sections: (1) an attention-getting headline, (2) a description of the major issues related to the headline, (3) a presentation of data related to the issues, and (4) a bottom-line recommendation or summary of the findings.

Comprehensive Evaluation Models

A number of sources offer useful and comprehensive guides to evaluation. In particular, two resources based on traditional education provide extensive advice that is applicable to distance education as well. These are described in the following section.

Models from Traditional Education

Foundational Models for 21st Century Program Evaluation (Stufflebeam, 1999) builds on decades of evaluation theory and professional practice, including the work of evaluation luminaries such as Tyler, Cronbach, Scriven, Guba, and Lincoln. In the book, Stufflebeam reviews 22 alternative evaluation approaches, sorting them into those that are "best to take along" into the 21st century and those that "would best be left behind" (p. 1). The 20 "keepers" are sorted into three broad categories: questions/methods-oriented evaluation approaches,

