
Section 1. Choosing Questions and Planning the Evaluation

Learn how to decide exactly what to evaluate and the steps you'll need to take to design, implement, and use the evaluation.

 

  • What do we mean by choosing evaluation questions?

  • Why is it necessary to choose evaluation questions carefully?

  • When should you choose questions and plan the evaluation?

  • Who should be involved in the process?

  • How do you choose questions and plan the evaluation?

Chapters 36-39 of the Community Tool Box concern the evaluation of community programs.  We've chosen to devote so much space to evaluation because it's one of the most important parts of any effort to improve community life and bring about lasting social change.  It can help you to better understand how well the planning and preparations for your program went, whether you implemented it as you meant to, and what the consequences were.  It can tell you whether you met your original objectives and goals or not, and give you information about what you need to change to be more effective.

Chapter 36 explains how an evaluation works, and gives some guidance as to how to develop it. Chapter 38 will assist you in gathering and analyzing the information you want, and Chapter 39 deals with how to use that information to improve your program and garner community and funding support. Here, in Chapter 37, we look at evaluation from a research point of view - how to plan and structure an evaluation that can help you better understand and improve what you do.

In this section, we'll discuss the first, and perhaps most important, step in evaluation research: deciding exactly what to evaluate. Each of the rest of the sections in the chapter will deal in detail with one of the steps you'll need to take to design, implement, and use the evaluation. The goal of the chapter is to provide guidelines that are useful to grassroots or community-based organizations as well as students or academic researchers.

What do we mean by choosing evaluation questions?

Every evaluation, like any other research, starts with one or more questions. Sometimes, the questions are simple and easy to answer. (Will we serve something close to the 50 people we expect to?) Often, however, the questions can be complex, and the answers less easy to find. (Which, or which combination, of the three parts of our intervention will affect which of the two behavior changes we seek within participants?) The questions you ask will guide not only your evaluation, but your program as well. By your choice of questions, you're defining what it is you're trying to change.

You choose your evaluation questions by analyzing the community problem or issue you're addressing, and deciding how you want to affect it. Why do you want to ask this particular question in relation to your evaluation? What is it about the issue that is the most pressing to change? What indicators will tell you whether that change is taking place?  Is that all you're concerned with? The answer to each of these and other questions helps to define what it is you're trying to do, and, by extension, how you'll try to do it.

For example, what's the real goal of a program to introduce healthier foods in school lunches? It could be simply to convince children to eat more fruits, vegetables, and whole grains. It could be to get them to eat less junk food. It could be to encourage weight loss in kids who are overweight or obese.  It could be to educate them about healthy eating, and to persuade them to be more adventurous eaters.

The evaluation questions you ask both reflect and determine your goals for the program. If you don't measure weight loss, for instance, then clearly that's not what you're aiming for. If you only look at an increase in children's consumption of healthy foods, you're ignoring the fact that if they don't cut down on something else (junk food, for instance), they'll simply gain weight.  Is that still better than not eating the healthy foods? You answer that question by what you choose to examine - if it is better, you may not care what else the children are eating; if it's not, then you will care.

Things to consider when choosing evaluation questions

What do you want to know?

Academics and other researchers may approach choosing research questions differently from those involved in community programs.  In addition to their practical and social applications, they may choose problems to research simply because they are interesting, or because they tie into other work that they or their colleagues are doing. Community service workers and others directly involved in programs, on the other hand, are concerned specifically with improving what they're doing so they can help to enhance the quality of life for the participants in their programs, and often for the community as a whole.  Since we assume that most people using this chapter of the Tool Box are likely to be practitioners in the community, let's look at some of the reasons they might pick a particular area to evaluate.

If you're running, or about to run, a program to affect a community issue or problem, you might want to know one or more of the following:

  • Is there a cause-and-effect relationship (i.e., does one action or condition directly cause another) between a particular action and a particular change?  Usually, you'll be concerned with this in terms of your program. (Does our smoking-cessation support group help members to quit smoking?) Sometimes, however, it might be important to look at it in terms of the community. (Does a smoking ban in public buildings, bars, and restaurants lead to a decrease in the number of community residents who smoke?)
  • If we try this new method, what will happen?
  • Will the program that worked in the next town, or the one that we read about in a professional journal, work with our population, or with our issue?

Why are you interested?

Some of the same differences between the concerns of researchers and the concerns of practitioners may hold here. Those interested primarily in research may simply be moved by curiosity or by the urge to solve a difficult problem. As a practitioner, on the other hand, you'll want to know the effects of what you're doing on the lives of participants or the community.

Your interest, therefore, might grow from:

  • Your experience with an issue and its consequences in a particular population or community
  • Your knowledge of promising interventions and their effects on similar issues
  • The uniqueness of the issue to your particular community or population
  • The similarity of the issue to other issues in your community, or the issue's interaction with other issues

Your interest as a community worker has to be considered in relation to your evaluation and the purpose of your program. Your basic intent is probably to improve things for the population or the community, but in what ways and by what means? Are you trying out some new things in the hope of making an already-successful program more successful? Are you importing a promising practice to see if it works with your population?  Are you trying to solve a particularly difficult professional problem?

A community mediation program found that it was having little success in cases involving adolescents.  After conferring with other similar programs - all of which were struggling with the same issue - mediators in the program devised a number of strategies to try to reach youth. The overall question they were concerned with - "Will these strategies make it possible to mediate successfully where teens are involved?" - was one with real consequences.

Is the issue you're addressing important to the community or to society?

Media reports about the issue, or community attempts to address it, are clear indicators that it is socially important. If it affects a particular group - violence in a given neighborhood, a high rate of heart disease among middle-aged Black males - it has an obvious impact on the community and society. If your program or intervention has the potential to help resolve the issue in other places, to be used by community workers in other fields, or to be applied in a number of ways, the importance of your analysis increases even further. If addressing the issue can lead to long-term positive social change, then the analysis is vitally important.

All of this affects your evaluation and the questions you ask. If the issue is one of social importance, then your evaluation of your work is socially important as well.  Are you addressing the aspects of your program or intervention that are of the greatest value to participants, the community, and society?  If not, how might you begin to do so?

How does the issue relate to the field?

The real question here is not whether the issue is important to the field - if it's important to the community, that's what matters. However, you should explore whether there's evidence from the field to apply to the issue.  Is what you're doing likely to be more effective than other approaches that have been tried?  If your approach isn't effective, are there other approaches out there that hold more promise? Can the published material about the issue help you understand it better, and give you better ideas about how to address it?

Is the issue general, rather than specific to your population or community?

Consider whether there is evidence that the issue occurs with a variety of populations and under a range of conditions. Also consider whether the observations or methods used to determine the issue's existence are accurate and whether they can be used in different situations and with different groups. Your evaluation may give you valuable information to pass on to practitioners in different fields or different circumstances.

Who might use the results of your evaluation?

If evaluation shows that your program or intervention is successful, that's obviously valuable information, especially if what you're evaluating is innovative and hasn't been tried before. Even if the evaluation turns up major problems with the intervention, that's still important information for others - it tells them what won't work, or what barriers have to be overcome in order to make it work.

Some of those who might use your results include individuals and groups affected by the issue; service providers and others who have to deal with the problem (in the case of youth violence, for instance, this last group might include police, school officials, small business owners, parents, and medical personnel, among others); advocates and community activists; and public officials and other policy makers.

Whose issue is it?

Who has to change in order to address the issue? The focus of the intervention will tell you whom the evaluation should focus on.

Some possibilities:

  • Those directly affected by the problem
  • Those in direct personal contact with those directly affected: parents, spouses and children, other relatives, friends, neighbors, coworkers
  • Those who serve or otherwise deal with those directly affected: medical professionals, police, teachers, social workers, therapists, etc.
  • Administrators and others who serve or deal with those indirectly affected: hospital or clinic directors, police chiefs, school principals, agency directors, etc.
  • Appointed or elected officials and other policy makers

Why is it necessary to choose evaluation questions carefully?

You know why you're running your program.  Evaluating it should just be a matter of deciding whether things are better when you evaluate than they were before you started, right?  Well, actually...wrong.  It's not that simple. First of all, you need to determine what "things" you are actually looking at (remember the school lunch example?). Second, you will need to consider how you will determine what you're doing right, and what you need to change. Here's a partial list of reasons why choosing questions beforehand is important.

  • It helps you understand what effects different parts of your effort are having. By framing questions carefully, you can evaluate different parts of your effort. If you add an element after the start of the program, for instance, you may be able to see its effect separate from that of the rest of the program...if you focus on examining it.  By the same token, you can look at different possible effects of the program as a whole. (Do adult basic education learners read more as a result of being in a program?  Are they more likely to register to vote? Do their children improve their school performance?)
  • It makes you clearly define what it is you're trying to do. What you decide to evaluate defines what you hope to accomplish. Choosing evaluation questions at the start of a program or effort makes clear what you're trying to change, and what you want your results to be.
  • It shows you where you need to make changes. Carefully choosing questions and making them specific to your real objectives should tell you exactly where the program is doing well and where the program isn't having the intended effect.
  • It highlights unintended consequences. When you find unusual answers to the questions you choose, it often means that your program has had some effects you didn't expect. Sometimes these effects are positive - not only did people in the heart-healthy exercise program gain in fitness, but a majority of them report changing their diet for the better and losing weight as well - sometimes negative - obese children in a healthy eating program actually gained weight, even though they were eating a healthier diet - and sometimes neither. Like the side effects of medication, the unintended consequences of a program can be as important as the program itself. (In the case of the exercise program, the changes in diet might do as much as or more than the exercise to maintain heart health, for instance, and may point toward changing the focus of the program in some way.)
  • It guides your future choices. If you find that your program is particularly successful in certain ways and not in others, for example, you may decide to emphasize the successful areas more, or to completely change your approach in the unsuccessful areas. That, in turn, will change the emphasis of future evaluation as well.
  • In participatory evaluations, it involves stakeholders in setting the course of the program, thus making it more likely that the program will meet community needs.
  • It provides focus for the evaluation and the program. Choosing evaluation questions carefully keeps you from becoming scattered and trying to do too many things at once, thereby diluting your effectiveness at all of them.
  • It determines what needs to be recorded in order to gather data for evaluation. A clear choice of evaluation questions makes the actual gathering of data much easier, since it usually makes obvious what kinds of records must be kept and what areas need to be examined.

When should you choose questions and plan the evaluation?

Evaluation questions, since they help shape your work, should be chosen and the evaluation planned when planning the overall program or effort. That gives you time and room for a participatory process, and gives you the chance to use the evaluation as an integral part of the program. As the program unfolds, you might find yourself adjusting or adding questions to reflect the reality of what is happening, but unless your original questions were misguided (you were wrong about what behavior had to change in order to produce certain results, for instance), they should serve you well.

Now let's discuss reality for many community-based and grassroots programs. They're often understaffed and underfunded. Staff members may be underpaid, and may often work many more hours a week than they're paid for, because of their dedication to social justice and social change. Most or all program staff may even be volunteers, with full-time jobs and family responsibilities aside from their work in the program.  Initial evaluation in these circumstances is often anecdotal - i.e., based on participants' comments and stories about their progress and staff members' personal, informal observations.  A formal evaluation will probably wait until there's funding for it, or until someone has the time to coordinate or take charge of it.

In that case, the "when" becomes "as soon as you can."  You may be dealing with a program that has just started, or with one that's been operating for a long time. You may know that changes need to be made, or it may seem that the program is in fact meeting its goals. Whatever the situation, evaluation questions need to be chosen, and an evaluation planned that will give you the information you need to improve your work. Even with a program that's been going on for a while, the questions can still help you define or redefine your work, and will certainly help you improve it over the long term.

Who should be involved in choosing questions and planning the evaluation?

If you've consulted other sections of the Tool Box concerned with evaluation, you probably know that we advocate that all stakeholders be involved in planning the evaluation.  We believe that the best evaluation is participatory. That means that there is representation of the views and knowledge of people affected by the issue to be addressed. The list of potential participants is essentially the same as that under "Whose issue is it?" in the first part of this section: those directly affected and their close contacts; those who work with those directly affected, or who deal directly or indirectly with them and the issue; and public officials. To these groups, we might add other concerned citizens, and those indirectly affected by the issue. (A shop owner may not be a victim of neighborhood violence, but fear of that violence might nonetheless keep customers away from his shop, for instance.)

Evaluations that involve all stakeholders have a number of advantages over those conducted in a vacuum by outside evaluators or agency or program staff. They're more likely to reflect the real needs of the community, and they bring to bear the community's knowledge of its own context - history, relationships, culture, etc. - without which a program and its evaluation can go astray.

Participation can range from simple consultation before the fact to complete involvement in every aspect of an evaluation - assessment, planning, data gathering, analysis, and passing on the information.  In general, the greater the involvement of stakeholders, the better, but in-depth involvement of the stakeholders may not always be possible. There are time disadvantages to participatory evaluation - it takes longer - and there are logistical concerns, as well. Participants may have nothing in their backgrounds to prepare them for research, so training in a number of areas may be necessary, requiring skill, careful planning, and yet more time. The level of participation your evaluation can sustain, therefore, relies to some extent on your time constraints and your capacity to train and support participants.

How do you choose questions and plan the evaluation?

Choosing questions

When you choose evaluation questions, you're really choosing a research problem - what you want to examine with your research. (Evaluation, whether formal or informal, is in fact research.) You have to analyze the issue and your program, consider various ways they can be looked at, and choose the one(s) that most nearly tell you what you want to know about what you're doing.  Are you just trying to determine whether you're reaching the right people in sufficient numbers with your program?  Do you want to know how well an intervention is working with specific populations? What kinds of behavior changes, if any, are taking place as a result?  What are the actual outcomes for the community? Each of these - as well as each of the many other things you might want to know - implies a different set of evaluation questions. To find the questions that best suit your evaluation, there is a series of steps you can follow.

Describe the issue or problem you're addressing

A problem is a difference between some ideal condition (all people 10 years of age or older should be able to read; people should be able to find a decent job) and some actual condition in the community or society (a 25% illiteracy rate among those attending a particular high school; 50% unemployment among minority youths in a particular city). This may mean the absence of some positive factor (qualified teachers and adequate educational facilities; entry-level jobs that are reachable from minority neighborhoods) or the presence of some negative factor (students' difficulty with English; discrimination against minority job applicants), or some combination of these.

To describe the issue or problem:

  • Describe the ideal condition, including the positive factors present and the negative factors absent.  What should it look like if everything were as you'd want it to be?
  • Describe the actual conditions that constitute the problem of interest, including the negative conditions present and the positive conditions absent.  What are conditions really like?
  • Describe the actual problem in terms of what you're hoping to change.  What positive factors do you want to produce and/or what negative factors do you want to eliminate?
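If it helps to make the discrepancy concrete, you can state it numerically once you have baseline data. Below is a minimal sketch in Python, using hypothetical figures that echo the literacy and employment examples above; the indicator names and numbers are purely illustrative, not part of any prescribed method.

```python
# A minimal sketch of stating a problem as the gap between ideal and actual
# conditions. The indicators and rates below are hypothetical examples.

def describe_gap(indicator, ideal, actual):
    """Return a one-line problem statement for a single indicator (rates as fractions)."""
    gap = ideal - actual
    return (f"{indicator}: ideal {ideal:.0%}, actual {actual:.0%}, "
            f"gap of {gap:.0%} to close")

# Hypothetical baseline data echoing the examples in this section.
print(describe_gap("Literacy among students at the high school", 1.00, 0.75))
print(describe_gap("Employment among minority youth in the city", 1.00, 0.50))
```

A statement like this can then be carried straight into your evaluation questions (for example, "By how much did we close the gap?").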

Describe the importance of the problem

To be sure that this is a problem you really should be addressing, consider its importance to those affected and to the community.

  • Is the discrepancy between ideal and actual conditions of the kind and size to be considered important?
  • What are the consequences (positive and negative) of the problem?
  • Who experiences these consequences (e.g., program participants; their families, friends, and peers; service providers, policymakers, and others)? How many people are affected?
  • How often and for how long are they affected? What is the intensity of the effect?
  • How much does the fact that the problem is experienced to this degree by these people matter to them?

You might also ask whether the effects of the problem matter to society, but in fact, that shouldn't make a difference.  If they matter to the people who experience them, they're important.  Society doesn't always consider a problem important if it's only a problem for a minority, or for a group that's generally ignored (the poor, the homeless).

In light of these factors, decide whether the problem is important to the evaluation.

Describe those who contribute to the problem

Whose behavior, by its presence or absence, contributes to the problem?  Are they in the program participants' personal environment (participants themselves, family, friends), service environment (teachers, police), or broader environment (policymakers, media, general public)? For each of them, consider the types of behavior that, by their presence or absence, contribute to the discrepancy that constitutes the problem.

Assess the importance and feasibility of changing those behaviors

How important is each of these behaviors to solving the problem?  What are the chances that your effort can have any effect on each of them?

Describe the change objective

Based on the above analysis, choose behavior changes to target in specific people. Where you can, specify the desired levels of change in targeted behaviors and outcomes (those changes in conditions that should occur if the problem were to be solved).

For example, a behavior change goal might be an increase in pre-employment capacity - self-presentation, job-seeking, interview skills, interpersonal competence, resume writing, basic skills, etc. - for minority job seekers aged 18-24. Or you might instead or in addition target policy makers, with the goal of having them offer tax incentives to businesses that locate in or close to minority communities.

This is a way of defining your work. If you're planning the evaluation as you plan the program - as you would in the ideal situation - then the questions you're asking the evaluation to examine reflect the problems you're trying to solve, and this kind of analysis is important.  If you're starting an evaluation of a program that has been in place for some time, then you're going to have to do some figuring after the fact about what consequences you think (hope) the program is having, and what they will lead to.  You may be talking about changes in specific participant behaviors, about behaviors that act as indicators of other changes, or about results of another sort (participants gaining employment, for instance, which may have a direct relationship to participant behavior or may have more to do with local economic conditions).

Make sure that the expected changes would constitute a solution or substantial contribution to the problem

If you conclude that they would not result in a substantial contribution, revise your choice of problem and/or your selection of targeted people and actions as necessary. If you think that what you're looking at in an evaluation doesn't address the problem, then you should be looking at something else. If the objectives you've chosen do constitute all or a substantial part of a solution, you've found your questions.

Setting

Now that you've chosen your questions, there may be other factors to consider, such as the settings in which the evaluation will be conducted. If your program is relatively small and/or has only one site, this wouldn't be an issue. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of setting may be important:

  • If your program is very large and/or has multiple sites
  • If different sites provide different services, activities, or conditions, or use different methods

Multiple sites

Multiple sites can present a challenge for an evaluation, because, although every effort may be made to make the program at all sites exactly the same, it will seldom be so. If the program relies on human interaction - teacher/learner, counselor/counselee, trainer/trainee, doctor/patient, etc. - there will be differences from site to site depending on the people staffing each. (The exception is when the same people staff all sites, providing the same services at each site at different times or on different days.)  Even if all are equally competent, no two staff members or teams will do things in exactly the same way or relate to participants in exactly the same way, and the differences can be reflected in differences in outcomes.  If methods or other factors vary from site to site, that will further complicate the situation.

Furthermore, the physical character of a site can influence not only program effectiveness, but also the recruitment of participants and whether or not they remain in the program long enough for it to have some effect (often called "retention.")  The site's layout, comfort, apparent safety and security, and - often most important - how easy it is to get to, all affect whether participants enroll and stay in the program.

Where you do have the capacity to evaluate all sites, it will be helpful to build into the evaluation a method of comparing them. This will allow you to identify the methods, conditions, or activities that seem to make one site particularly successful and adopt them at all sites, and to identify and change those that seem to create barriers to success at others.
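If comparable outcome records are kept at each site, even a very simple comparison can flag the sites that stand out. Here is a minimal sketch, assuming hypothetical per-participant outcome scores grouped by site; the scores and the flagging threshold are invented for illustration only.

```python
# A minimal sketch of comparing sites on a common outcome measure.
# The per-participant scores and the flagging threshold are hypothetical.
from statistics import mean

scores_by_site = {
    "Downtown": [62, 70, 68, 75, 71],
    "Northside": [55, 58, 60, 52, 57],
    "Riverside": [80, 77, 83, 79, 81],
}

overall = mean(score for scores in scores_by_site.values() for score in scores)

for site, scores in scores_by_site.items():
    site_mean = mean(scores)
    # Flag sites whose average differs noticeably from the program-wide average.
    flag = "worth a closer look" if abs(site_mean - overall) > 5 else "near the norm"
    print(f"{site}: mean {site_mean:.1f} (program-wide {overall:.1f}) - {flag}")
```

A flagged site isn't necessarily better or worse; it simply tells you where to look more closely at methods, conditions, or activities.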

If you can't evaluate each site separately, you'll have to decide which one(s) will give you the information that will most help in adjusting and improving your program. If you're most concerned with assessing your overall effectiveness, this may mean evaluating the site(s) closest to the program norm, in terms of methods, conditions, activities, goals, participant/staff interaction, etc. If, on the other hand, your chief consideration is learning whether a particular new or unusual method or situation is working, you may find yourself evaluating the site(s) least like the others.

If sites appear only minimally different, some other considerations that may come into play are:

  • The number and character of participants at the site. Participants at a particular site may be experiencing the effects of the issue more severely, or may have a particular important characteristic, such as a language barrier.
  • The ability and willingness of participants and staff to support the evaluation research. If staff at a particular site are unable or unwilling to record observations, attendance, and other key information, or if site participants are unable or unwilling to be interviewed or monitored, evaluation at that site might be difficult.
  • The stability of the population at the site. If participants at a site come and go at a rapid rate - unless that's the program's intent - it can be difficult to gain information that contributes to an accurate evaluation. An exception, of course, occurs if one point of the evaluation is to find out why participants stay for so short a time, and to try to develop methods or create conditions to assist them to remain in the program long enough to reach their goals.

Sites with different methods, conditions, activities, or services

Programs sometimes are organized so that different methods are used or different services provided at different sites. In other cases, conditions may vary from site to site because of the sites' geographical locations or the available space. The ideal situation is to evaluate all sites and compare the effects of the different methods, conditions, or services. When that's not possible, you'll have to decide what's most important to find out.

If the methods, services, or conditions at a particular site are new or innovative, you may want to evaluate them, rather than those that have a track record. There may be a particular method or service that you want to evaluate, in which case the decision about which site to choose is obvious. The decision should be based on what makes the most sense for your program, and what will give you the best information to improve its effectiveness.

When you have the capacity to choose more than one site to evaluate, it often makes sense to choose two or three sites that are different - especially if each is representative of other sites in the program or of program initiatives - so that you can compare their effectiveness. Even where sites are essentially similar, you'll get more information by evaluating as many as you can.

Participants

Another factor to consider is the participants whose behavior, activity, or circumstances will be evaluated. If your program is relatively small this might not be an issue - the participants will simply be all those in the program. However, if you don't have the resources - whether finances, time, or personnel - to evaluate the whole program, there are some situations in which the choice of participants may be important:

  • If your program includes different groups of participants (groups that are in different stages of the program, or that are exposed to different methods or services).
  • If groups of participants belong to populations with distinctly different cultures, stemming from race, ethnicity, class, religion, or other factors.

Multiple groups

There are a number of reasons why there might be multiple groups of participants in a program. You might start different groups at different times, either because the program has a rolling start schedule (when there are enough people for a class/training group, one will begin), or because the program is aimed at different groups (for example, 5 year-olds, 8-year-olds, and 14-year-olds). You might also be trying different strategies with different groups.

The Brookline Early Education Project (BEEP), a program aimed at school readiness for children aged pre-birth through 5, recruited expectant families in three cohorts over the course of three years.  In addition, families in each cohort were assigned to one of three levels of service. Thus, there were actually nine different groups among BEEP participants, even though, by the third year, all were receiving services at the same time.

Once again, if there's no problem in evaluating the whole program, participants will simply include everyone. If that's not possible, there are a number of potential choices:

Evaluate your work with only one group, with the expectation that work with the others will be evaluated in the future. In this case, you'd probably want to choose the one for whom you consider the program most crucial. They might be at greater risk (of heart attack, of school failure, of homelessness, etc.) or might be experiencing the issue at a high level of intensity (daily shooting incidents in the neighborhood, high rates of teen pregnancy, massive unemployment).

Include a small number (2-4) of groups in your evaluation. You might want to choose groups with contrasting characteristics (different ages, for example, or addressed by different strategies). On the other hand, depending on the focus of your evaluation, you might want groups that are essentially similar, to see whether your work is consistent in its effects.

Choose a few participants from each group to focus your evaluation on.  While this won't give you a complete picture, it should give you enough information to tell where your program is accomplishing its goals and where it needs improvement. The differences in the ways participants in different groups respond to the program (assuming there are differences) can also give you ideas for ways to change what you're doing.
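One way to choose a few participants from each group without hand-picking them is a simple stratified random sample. The sketch below assumes hypothetical group rosters; the group names and the number sampled per group are illustrative only.

```python
# A minimal sketch of drawing a small random sample from each participant group
# (stratified sampling). Rosters and the per-group sample size are hypothetical.
import random

groups = {
    "5-year-olds": ["Ana", "Ben", "Carla", "Dev", "Ed", "Fay"],
    "8-year-olds": ["Gia", "Hal", "Ian", "Jo", "Kim"],
    "14-year-olds": ["Lee", "Mia", "Ned", "Omar", "Pia", "Quinn"],
}

per_group = 3        # how many participants to follow closely in each group
random.seed(1)       # fixed seed so the draw can be reproduced in the write-up

sample = {name: random.sample(roster, min(per_group, len(roster)))
          for name, roster in groups.items()}

for group, chosen in sample.items():
    print(f"{group}: {', '.join(chosen)}")
```

Drawing at random, rather than choosing the participants you know best, helps keep the smaller sample from quietly favoring your most engaged members.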

Participants from different populations and cultures

Cultural factors can have an enormous effect on participants' responses to a program. They can govern conceptions of social roles, family responsibilities, acceptable and unacceptable behavior, attitudes toward authority (and who constitutes authority), allowable topics of conversation, morality, the role of religion - the list goes on and on. In planning a program that involves members of different populations and cultures, you essentially have three choices:

  • Plan your program and implement it in the same way for everyone. If the program involves groups - classes, support groups, etc. - participants' membership is determined not by population group but by when they sign up, what time of day they can attend, what they sign up for, or whatever other criteria make sense logistically.
  • Plan your program to be as culturally sensitive as possible, and try to screen out anything that might be offensive to or difficult for any group. In this instance, you might be prepared to respond if participants from a particular population requested a group of their own.
  • Divide participants by cultural group and plan different culturally sensitive approaches for each. Your overall approach might be the same for everyone, but the way you apply it might differ by culture.

In any of these instances, it would probably be important to understand how well your approach is working with members of the various populations.  If you can evaluate the whole program, make sure that you include enough members of each group so that you can compare results (and their opinions of the program) among them.  If your evaluation possibilities are limited, then your choices are similar to those for multiple groups of other kinds, and will depend on what exactly is most useful for you.

There are interactions between the choice of sites and the choice of participants here. You may be concerned about the effects of your program on a particular population, which may be largely concentrated at one site.  In that case, if you have limited resources, you may want to evaluate only that site, or that site and one other.

Regardless of other considerations, you may want to set some guidelines about whom you include in the evaluation. How long do people have to be in the program, for instance, before they're included? In other words, what constitutes participation? (This also sets a criterion for who should be counted as a drop-out: anyone who starts, but leaves before meeting the standard for participation.) What about those whose attendance is spotty - a few days here, a few days there, sometimes with weeks in between? Do they have to have attended a certain number of hours to be considered participants?
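If you do settle on an hours-attended threshold, applying it consistently is straightforward once attendance is recorded. The sketch below assumes hypothetical attendance records and a hypothetical 20-hour cutoff; the criterion itself is a judgment for you and your stakeholders, not something the code decides.

```python
# A minimal sketch of applying a participation criterion to attendance records.
# The records and the 20-hour threshold are hypothetical; set your own criterion.
MIN_HOURS = 20  # hours of attendance required to count as a participant

attendance_hours = {
    "Alvarez": 45, "Brown": 12, "Chen": 28, "Diallo": 3, "Evans": 20,
}

participants = [p for p, h in attendance_hours.items() if h >= MIN_HOURS]
dropouts = [p for p, h in attendance_hours.items() if h < MIN_HOURS]

print("Counted in the evaluation:", ", ".join(participants))
print("Counted as drop-outs:", ", ".join(dropouts))
```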

These issues can be more complex than they seem. People may start and drop out of a program numerous times, and then finally come back and complete it. Many others start programs numerous times, and never complete them. It's usually impossible to tell the difference until someone actually gets to the point of completion, whatever that means for the particular program.

In a reversal of the start-many-times-before-completing scenario, there can be a few people who stay in a program right up till the end and then drop out. This may have to do with the fear of having to cope with success and a change in self-image, or it may simply be a pattern the person has learned to follow, and will have to unlearn before being able to complete the program.

Should any or all of these people be included in or excluded from an evaluation, either before (because of their history in the program) or after the fact? That's a decision you'll have to make, based on what their inclusion or exclusion will tell you. Just be sure that your evaluation clearly describes the criteria that you decide to use for your participants.

If you're an outside evaluator or academic or other independent researcher

Up to this point, we've largely ignored the evaluation difficulties faced by evaluators not directly connected with the organization or institution running the program they're evaluating. If you've been hired or designated by the organization or a funder to evaluate the program, you have to establish trust, both with the organization and its staff and with participants, if you hope to get accurate information to work with. You also have to learn enough in a short period about the community, the organization, the program, and the participants to devise a good evaluation plan, and to analyze the data you and others gather.

If you're an independent researcher - a graduate student, an academic, a journalist - you face even greater obstacles. First, you have to find a place to conduct your research - a program to evaluate - that fits in with your research interests. Then, you have to convince the organization running that program to allow you to do the research. Once you've jumped that hurdle, you're still faced with all the same tasks as an outside evaluator: establishing trust, understanding the context, etc.

Let's look first at the process you as an independent researcher might follow in order to choose and gain access to a setting appropriate to your interests. Once you've gained that access, you've become an outside evaluator, so from that point on, the course of preparing for the evaluation will be the same for both.

Choose a setting

If you're an academic or student, you can probably find an appropriate program by asking colleagues, professors, and other researchers at your institution. If none of them knows of one offhand, someone can almost undoubtedly put you in touch with human service agencies and others who will. Other possible sources of information include the Internet, funders, professional associations, health and human service coalitions, and community organizations. Public funding information is often available on the web, in libraries, or in newspaper archives. The wider you spread your net, the more likely you are to find the program you're looking for.

The right program will obviously vary depending on your research interests, but some questions that will inform your choice include:

  • Does the setting include people who are actually experiencing the problem that is of interest to you?
  • Is the setting similar to others of this type? (If not, its program might not be useful to others dealing with the issue, even if it works well in its own context.)
  • Does the setting provide support for the research? Will staff, participants, and others help with data gathering, be forthcoming about context questions, cooperate with you?
  • Does the setting have the resources to maintain the program after your evaluation is done?
  • Does the setting permit the changes in operation required by the research?  If the planning of the evaluation and choosing of questions point to doing things differently, can and will the program make the necessary changes?
  • Is the setting accessible? Accessibility includes not only handicap accessibility, but whether a site is in a neighborhood that feels welcoming or safe to participants, whether it is easily reachable by public transportation or on foot from the areas from which participants are drawn, and whether it is in a building or institution that doesn't feel intimidating or strange (a university campus or building can seem as threatening as a fortress to someone who is insecure about his educational background, for example). Accessibility can be the determining factor in whether participants consider a program, or whether they stay in it.
  • Is the setting stable? Are the program and organization stable enough that you know they'll be able to support their work at the current level, at least until the evaluation is completed?

Once you've found an appropriate setting, you'll have to convince the organization to collaborate with you on an evaluation. The next three steps are directed toward that goal.

Learn as much as you can about the organization you've chosen

Just as you wouldn't go to a job interview without doing some research about the employer, you shouldn't try to gain the cooperation of an organization without knowing something about it - its mission, its goals, whom it serves, who the director and board members are, etc. If someone told you about the organization, she may have, or may know someone who has, much of the information you need. If the organization maintains a website, much of that information will be available there. If it's incorporated, the Secretary of State's office in the state of incorporation and/or other state offices will have information about the officers (i.e., the Board of Directors) and other aspects of the organization. Funding agencies may also have information that's a matter of public record, including proposals.

Contact the appropriate person(s) and request an interview

Find out whom (by name as well as position) you should talk to about conducting a research project in the organization you've chosen.

Depending on the organization, this could be the board president, the executive director, or the program director (if the program you're interested in is only part of a larger organization).  In any case, it might be wise to involve the program director even if he's not the final decision-maker, since his cooperation will be crucial for the completion of your research.

  • If you can, get a personal introduction. It's always best if you come recommended by someone familiar with the person you need to speak with.
  • If you can't get a personal introduction, it's usually best to send a letter requesting a meeting and explaining why, and follow it up with a phone call.
  • Before the meeting, send a proposal outlining what you want to do. This should be substantive enough to help the organization decide whether it wants to work with you, but not so specific that it doesn't allow for collaborative planning of the evaluation.

Plan and prepare for the initial meeting

There are several purposes for this meeting, besides the ultimate one of getting permission and support for your project (or at least an agreement to continue to discuss the possibility). They include:

  • Establishing your credentials - the experience, educational background, and any other factors that equip you to conduct this evaluation. This might include references from colleagues, professors, or other organizations you've worked with.
  • Learning more about the program and the organization
  • Explaining what you want to do and why, what form the evaluation results are likely to take, what you'll do with them, who'll have access, etc. This explanation should also cover issues of confidentiality and permission of participants.
  • Explaining what you need from the organization and/or program - participation of participants and staff, for instance, any logistical support, access to records, or access to program activities
  • Explaining what you're offering in return - your services for a comprehensive formal evaluation, any stipends, equipment or materials, other support services, or whatever else you may have to offer
  • Clarifying the organization's needs, and discussing how they fit with your own - and how both can be satisfied

Assuming that your presentation has been convincing, and you're now the program evaluator, the rest of the steps here apply to both independent researchers and outside evaluators.

Find out all you can about the context

This may play out differently for outside evaluators than it does for independent researchers, but it's equally important for both. It means finding out all you can about the community, the organization, the program, and the participants beforehand - the social structure of the community and where participants fit in it, the history of the issue in question, how the organization is viewed, relationships among groups and individuals, community politics, etc.

If you're an outside evaluator, you can pick the brains of program administrators, staff, and participants about the community, the organization, and the issue. Ask them to steer you to others - community leaders, officials, longtime residents, clergy, trusted members of particular groups - who can give you their perspectives as well. If possible, get to know the community physically: walk and/or drive around it, visit businesses, parks, restaurants, the library. Understanding how the issue plays out in the community, the nature of relationships among groups and individuals, and what life is like in the neighborhoods where participants live will help a great deal in analyzing the evaluation of the program.

If you're an independent researcher, learn as much about the context as you can before you contact the program. Websites (for the organization and/or the community) and libraries are two possible sources of information, as are community and organization literature and people who know the community.  Learning about the community, the organization, and the participants beforehand will both help you determine whether this program fits with your research and help you advocate for its cooperation with your project. Once you have that cooperation, you can follow the same path as an outside evaluator (since that's what you are) to learn as much about the context of the program as you can.

Establish trust with program administrators, staff, and participants

This can be the most difficult part of an evaluation for someone from outside the organization. There's no magic bullet or predictable timeline, but there are several things you can do:

  • Be yourself.  Don't feel you have to act a certain way: deal with people in the program as you do with friends and acquaintances in other circumstances. People can tell when you're being false, and are unlikely to trust you if you are.
  • Treat everyone with equal respect, as colleagues in a research project.
  • Don't assume you know more than anyone else just because you're the professional.
  • Share freely what you do know, but don't lock yourself into any one process or method, especially when a key individual takes the opposite position.
  • Ask administrators, staff, and participants what they want from the evaluation, and discuss how the evaluation could provide it.
  • Don't be afraid to say "I don't know, but I'll find out," and then do.
  • Follow through on whatever you say you'll do. Don't promise anything you can't deliver on, and make deadlines reasonable, so you can meet them.

General tips for all evaluators

These steps apply to everyone, internal evaluators as well as external.

Aim for a participatory evaluation

We've discussed above the involvement of all stakeholders to the extent possible. Involving participants, program staff, and other stakeholders in participatory planning and research can often get you the most accurate data, and may give you entry to people and places you might not otherwise have access to. On the other hand, participatory planning and research, as we've explained, takes time and energy. If you have limited time, you may not be able to set up a fully participatory project.  You can, however, still consult with stakeholders, and involve them in ways that don't necessarily involve training or large amounts of your time. They can help you line up interviews with participants or other important informants, for instance, and/or act as informants themselves about community conditions and relationships.

At least the people in charge of the program, and probably those implementing it as well, will expect to be part of the planning of the evaluation. They are, after all, the ones who need to know whether their work is effective, and how to improve it. Involving participants as well, in roles ranging from informants about context to actual researchers, is likely to enrich the quantity and quality of the information you can obtain.

Plan the evaluation, in collaboration with stakeholders

That collaboration should be at the highest level of participation possible, given the nature of the program, the time available, and the capacity of those involved (if program participants are five-year-olds, they probably have relatively little to contribute to evaluation planning...but their parents might want to be involved.)

The actual planning involves ten different areas, each of which will be the subject of one of the remaining sections in this chapter:

  • Information gathering and synthesis
  • Designing an observational system
  • Developing and testing a prototype intervention
  • Selecting an appropriate experimental design
  • Collecting and analyzing data
  • Gathering and interpreting ethnographic information
  • Collecting and using archival data
  • Encouraging participation throughout the research
  • Refining the intervention based on the evaluation
  • Preparing the evaluation results for dissemination

Once the planning is done, it's time to get started on conducting the evaluation. And when you're finished - having analyzed the information and planned and made the changes that were needed - it's time to start the process again, so that you can determine whether those changes had the effects you intended.  Evaluation, like so much of community work, is a process that goes on as long as the work itself does.  It's absolutely essential to the continued improvement of your program.

In Summary

Choosing evaluation questions - the areas in your work you'll examine as part of your evaluation of your program - is key to defining exactly what it is you're trying to accomplish. For that reason, those questions should be chosen carefully as part of the planning process for the program itself, so that the questions can guide your work as well as your evaluation of it. The more that stakeholders can be involved in that choice and planning, the more likely you are to create a program that successfully meets its goals and serves the community.

Choosing those questions well entails understanding the context of the program - the community, participants, the culture of any groups involved, the history of the issue and of the social structure of the community and the organization - and (if you're an outside evaluator without ties to the program) establishing trust with administrators, staff members, and participants. That trust will enable you to conduct a participatory evaluation that draws on the knowledge and talents of all stakeholders, and to plan an evaluation that fits the goals of the program and accurately analyzes its strengths and weaknesses. With that analysis in hand, you'll be able to make changes to improve the program. Then you're ready to start the whole process again, so you can evaluate the effects of the changes you've made.

Contributors
Stephen B. Fawcett
Phil Rabinowitz

Online Resources

CDC Evaluation Brief: Developing Process Evaluation Questions addresses how to develop process evaluation questions, including a step-by-step process to formulating questions.

Focus the Evaluation Design, a resource from the CDC's Introduction to Program Evaluation for Public Health Programs, offers a variety of program evaluation-related information.

The Magenta Book - Guidance for Evaluation provides an in-depth look at evaluation. Part A is designed for policy makers. It sets out what evaluation is, and what the benefits of good evaluation are. It explains in simple terms the requirements for good evaluation, and some straightforward steps that policy makers can take to make a good evaluation of their intervention more feasible. Part B is more technical, and is aimed at analysts and interested policy makers. It discusses in more detail the key steps to follow when planning and undertaking an evaluation and how to answer evaluation research questions using different evaluation research designs. It also discusses approaches to the interpretation and assimilation of evaluation evidence.

Performance Measurement for Public Health Policy is a new tool designed by APHA and the Public Health Foundation to help health departments and their partners assess and improve the performance of their policy activities; this tool is the first to focus explicitly on performance measurement for public health policy. The first section of the tool gives a brief overview of the role of health departments in public health policy, followed by an introduction to performance measurement within the context of performance management. It also includes a framework on page 5 for conceptualizing the goals and activities of policy work in a health department. The second section of the tool consists of tables with examples of activities that a health department might engage in and sample measures and outcomes for these activities. The final section of the tool provides three examples of how a health department might apply performance measurement and the sample measures to assess its policy activities.

Specify the Key Evaluation Questions is a resource provided by Better Evaluation.  It offers several links to guides, tools, and examples to assist in developing effective evaluation questions.

Print Resources

Chen, H.T. (2004). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. New York, NY: SAGE.

Holden, D.J. & Zimmerman, M.A. (2008). A practical guide to program evaluation planning. New York, NY: SAGE.

Fawcett, S., Suarez, Y., Balcazar, E., White, W., Paine, A., Blanchard, K., & Embree, M. (1994). "Conducting Intervention Research: The Design and Development Process."  In J. Rothman and E.J. Thomas (Eds.), Intervention Research: Design and Development for Human Service (pp. 25-54).  New York, NY: Haworth Press.

Fawcett, S., Boothroyd, R., Schultz, J., Vincent, F., Carson, V., & Bremby, R. (2003). Building Capacity for Participatory Evaluation within Community Initiatives. Journal of Prevention and Intervention in the Community, 26, 21-36.

Wholey, J.S. & Hatry, H.P. (2010). Handbook of practical program evaluation. Hoboken, NJ: Wiley.