Program evaluations are “individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working.” [1] What was your reaction to this definition? Has the prospect of undertaking a “research study” ever deterred you from conducting a program evaluation? Good news: program evaluation is not the same as research, and it usually does not need to be as complicated.
In fact, evaluation is a process in which we all unconsciously engage, to one degree or another, on a daily, informal basis. How do you choose a pair of boots? Without thinking about it, you might weigh criteria such as looks, how well the boots fit, how comfortable they are, and how appropriate they are for their intended use (walking long distances, navigating icy driveways, etc.).
Evaluation and research use the same techniques, and both are equally systematic and rigorous (“exhaustive, thorough and accurate” [2]). Even so, there are a few key differences between evaluation and research:
Research aims to produce new knowledge within a field. Ideally, researchers design studies so that findings can be generalized to the whole population (every single individual within the group being studied). Evaluation, by contrast, focuses only on the particular program at hand. Evaluations may also face added resource and time constraints.
Daniel L. Stufflebeam, Ph.D., a noted evaluator, captured it succinctly: “The purpose of evaluation is to improve, not prove.” [3] In other words, research strives to establish that a particular factor caused a particular effect; for example, that smoking causes lung cancer. The requirements for establishing causation are very high. The goal of evaluation, however, is to help improve a particular program. To improve a program, evaluations get down to earth: they examine all the pieces required for successful outcomes, including the practical inner workings of the program, such as its activities.
Another prominent evaluator, Michael J. Scriven, Ph.D., notes that evaluation assigns value to a program, while research seeks to be value-free. [4] Researchers collect data, present results, and then draw conclusions that link expressly to the empirical data. Evaluators add extra steps: they collect data, examine how the data lines up with previously determined standards (also known as criteria or benchmarks), and judge the worth of the program. So while evaluators also draw conclusions that must faithfully reflect the empirical data, they take the extra steps of comparing program data to performance benchmarks and judging the program’s value. Though this may seem to cast evaluators in the role of judge, remember that evaluations determine the value of programs precisely so they can help improve them.
Tom Chapel, MA, MBA, Chief Evaluation Officer at the Centers for Disease Control and Prevention (CDC), differentiates between evaluation and research on the basis of timing:
Researchers must stand back and wait for the experiment to play out. To use the analogy of cultivating tomato plants, researchers ask, “How many tomatoes did we grow?” Evaluation, on the other hand, is a process unfolding “in real time.” In addition to counting the tomatoes, evaluators also inquire about related areas: “How much watering and weeding is taking place?” “Are there nematodes on the plants?” If evaluators realize that activities are insufficient, staff are free to adjust accordingly. [5]
To summarize, evaluation: 1) focuses on programs vs. populations, 2) improves vs. proves, 3) determines value vs. stays value-free, and 4) happens in real time. In light of these four points, evaluations, when carried out properly, have great potential to be relevant and useful for program-related decision-making. How do you feel about evaluation now?
For more resources, see our Library topic Nonprofit Capacity Building.