Today’s executive director must manage, of course; but that’s only a small part of the job. The tough part is evolving from manager to leader. It’s not about hierarchy. It’s about infusing your organization with a vision that advances your mission.

Here we offer guidance and some of the best resources we know of to help you make the transition. In fact, this entire website is designed to support you in this effort. Whether it’s communications, fundraising or governance, we want everyone who cares about your mission to be guided by a bigger-picture vision.

Evaluation FAQs

An evaluation plan is a roadmap that identifies your evaluation goals and the ways in which you’ll collect and analyze data. This includes what information you’ll collect, along with how, where and when you’ll collect it. It also identifies your research methods, the people responsible for carrying out the plan, timelines and budget.

One of the most important aspects of your plan is articulating the questions that the evaluation will be structured to answer. Frequently these will relate to both outputs (e.g., the specifics of what is being done – services provided, number of people served) and outcomes (i.e., the actual change that resulted from the program).

For example, let’s say you run a technology training program for at-risk youth. Output-oriented questions might look at the number of trainings conducted, number of people served, retention rates, and things like meetings with community leaders or policymakers. Questions geared toward outcomes might measure beneficiaries’ increase in skills, changed attitudes, behavior changes, etc.
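To make the output/outcome distinction concrete, here is a minimal sketch in Python of how such a training program might track the two kinds of measures side by side. Every function name and number below is invented for illustration; real programs would define their own metrics.

```python
# Illustrative sketch: separating output metrics (activity counts) from
# outcome metrics (measured change) for a hypothetical training program.
# All names and figures here are hypothetical.

def summarize_outputs(sessions, enrolled, completed):
    """Outputs describe what was done and how many people were reached."""
    return {
        "trainings_conducted": sessions,
        "people_served": enrolled,
        "retention_rate": round(completed / enrolled, 2) if enrolled else 0.0,
    }

def summarize_outcomes(pre_scores, post_scores):
    """Outcomes describe change: here, gains on a pre/post skills assessment."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return {
        "avg_skill_gain": round(sum(gains) / len(gains), 2),
        "share_improved": round(sum(g > 0 for g in gains) / len(gains), 2),
    }

# Hypothetical numbers for one program year.
outputs = summarize_outputs(sessions=12, enrolled=200, completed=150)
outcomes = summarize_outcomes(pre_scores=[50, 60, 55], post_scores=[70, 65, 80])
```

Note how the output summary only counts activity, while the outcome summary requires a before-and-after measurement; that extra measurement burden is exactly why outcome questions need to be planned in advance.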

The number one reason you should create an evaluation plan is because you want to deliver the very best programs and services. An evaluation plan helps you to refine your data collection and assessment practices so that the information you glean is most useful to advancing your mission and the objectives of the program. It also helps to establish a culture of evaluation within your organization whereby people are always thinking about how to make sure the necessary information is being gathered to improve programs.

Evaluation results can also be extremely useful communications tools that help you more efficiently respond to your funders’ needs. They can be significant credibility builders that increase your capacity to raise funds for the program.

Need help identifying appropriate outcomes for your program?

The Center for What Works (www.whatworks.org) has teamed up with the Urban Institute to develop recommended indicators for nonprofits working in 14 different issue areas.

A process evaluation looks at the actual development and implementation of a particular program. It establishes whether you’ve hit quantifiable targets and implemented strategies as planned. It examines the program from start to finish, assessing how the program’s components contributed to its results. This type of evaluation can be very useful in determining whether a program should be continued, expanded, refined or eliminated.

While they can be done separately, outcome and impact evaluations are also important additions to a process evaluation. Outcome evaluation measures the change that has occurred as a result of a program. For example, your process evaluation might confirm that 200 people have completed your skills-training program. An outcome evaluation would tell you how many of those demonstrated increased confidence, changed behaviors, found jobs because of the new skills, etc.

An impact evaluation looks at the long-term, deeper changes that have resulted from that program. This type of evaluation could, for example, suggest that the changes to your skills-training participants’ lives continued over time and perhaps transferred across generations.

While the outcome evaluation tells us what kind of change has occurred, an impact evaluation paints a picture as to how a program might have affected participants’ lives on a broader scale.

It’s important to note that certain types of evaluation are more involved than others. For example, while certain outcomes can be easily and reliably measured, true impact measurement is a much trickier business. In its truest sense, impact measurement often involves using an independent evaluator, establishing control groups, and measuring changes over extended periods of time. This can be extremely costly, and reliable results may take years to emerge (depending on the nature of the program, of course).

We raise these issues not to dissuade you from impact evaluation. Rather, we want to paint a complete-enough picture to encourage you to invest the resources when the time is right. In other words, if your program’s content is potentially replicable and highly impactful – an HIV prevention program or a curriculum intended to connect young adults to good jobs, for example – then it’s probably worth finding the necessary funding to have it evaluated at full scale.

A program logic model is an important component of evaluation planning because it helps you to identify the most relevant evaluation questions. It shows both what the program is supposed to do and how its components will lead to outcomes.

There’s probably more “hype” around the term than the idea. In fact, there’s a good chance you’re already using a logic model and just haven’t put it down on paper yet. Logic models are often presented in complex terms, but the concept itself isn’t that complicated. At its most basic level, a logic model is a graphic or roadmap that shows how your program is intended to work. It depicts a linear path from your assumptions to your process, to expected outcomes and impact.

The W.K. Kellogg Foundation describes a basic logic model as a pathway that starts with resources/inputs and then moves toward activities, outputs, outcomes and finally, impact.
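Because a logic model is just a linear pathway, it can even be sketched as a simple data structure. The example below expresses the basic Kellogg-style pathway in Python; the stages come from the description above, but every entry under each stage is a hypothetical placeholder, not content from any real model.

```python
# A minimal sketch of the basic logic-model pathway described above,
# expressed as an ordered structure. All entries are hypothetical.

LOGIC_MODEL_STAGES = ["resources/inputs", "activities", "outputs",
                      "outcomes", "impact"]

logic_model = {
    "resources/inputs": ["trainers", "computer lab", "grant funding"],
    "activities": ["weekly technology workshops", "one-on-one mentoring"],
    "outputs": ["12 workshops delivered", "200 youth served"],
    "outcomes": ["increased technical skills", "improved job readiness"],
    "impact": ["long-term gains in employment and income"],
}

def describe(model):
    """Render the linear pathway from inputs through impact."""
    return " -> ".join(stage for stage in LOGIC_MODEL_STAGES if model.get(stage))
```

Writing the model down this way makes the point in the text tangible: each stage should flow from the one before it, and each stage suggests its own evaluation questions.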

Fortunately, there are a lot of free resources to help nonprofits develop a program logic model. The Kellogg Foundation’s freely available Logic Model Development Guide and Innovation Network’s free online Logic Model Builder can be extremely useful to anyone looking for assistance in this area. Importantly, both of these resources offer guidance and tools to help nonprofits connect the logic model to an evaluation plan.

It’s important to note that the process of developing an effective logic model will require some thought and preparation. For organizations looking for guidance through that process, the Center for Nonprofit Management provides coaching opportunities to help make the most of available tools. For more information, visit www.cnmsocal.org and click on the “Consulting & Coaching” tab.

The Joint Committee on Standards for Educational Evaluation (www.jcsee.org) has defined four main principles that underlie good evaluation:

  1. Utility: This ensures that the evaluation is collecting credible, useful, timely information. The purpose of an evaluation is to determine what works and how, and to inform decision-making. If it doesn’t address current needs and realities, then there’s really no sense in moving forward.
  2. Feasibility: This principle prioritizes evaluation that is practical, cost-effective and politically viable.
  3. Propriety: This relates to legal and ethical standards that should govern an evaluation, including careful consideration of those involved as well as those who might be impacted by the results.
  4. Accuracy: The data yielded from an evaluation must be accurate in order to be useful (and to ensure your organization’s credibility). This is connected to the rigor of your evaluation plan, data collection methods, and willingness to report the bad just as much as the good.

To the last point, as much as it might be tempting to focus on only the positive, doing so erodes the credibility of your evaluation and your organization. Most donors, and people in general, understand that failure is instructive. For a nonprofit program, it can lead to the refinement or refocus that ultimately creates positive change. On the other hand, you’d be hard-pressed to find anyone understanding of an organization that swept negative findings under the rug. Our point? Be comprehensive in your reporting. Just as in life, it doesn’t mean much to be reliable only half the time.

The USDA’s Food and Nutrition Service (www.fns.usda.gov) provides a range of useful guidelines in Principles of Sound Impact Evaluation. While the full publication is intended for nutrition educators, the general guidelines put forth will apply to many nonprofits. Specifically:

  • Make certain that the program components can realistically be evaluated.
  • Build on available research.
  • Choose measures that fit what you’re doing and approach existing standards for credible assessment.
  • Observe ethical standards for the fair treatment of study participants.
  • Measure both process and outcome.
  • Report both positive and negative results – but do so accurately.
  • Share results to maximize their value.

Finally, your evaluation design should flow from your logic model. Since your logic model explains how you’ll create change, an evaluation linked to it will confirm where you’re on target and what needs to be refined.

Adapted in part from the Joint Committee on Standards for Educational Evaluation’s “Program Evaluation Standards Statements.”

Ensuring the effectiveness of a nonprofit’s programs is among a board of directors’ chief responsibilities. Effective evaluation helps the board carry out that duty. The board’s specific level of involvement, however, depends on a number of factors, including the organization’s size, the scale and potential impact of the evaluation, and staff capacity.

A board member has a responsibility to determine your organization’s internal capacity for evaluation and to assess the financial feasibility of the evaluation plan. From there, he or she might be involved in everything from tracking the planning process and ensuring that the right questions are being asked, to confirming the objectivity and integrity of the results. For a smaller organization where evaluations are not the norm, the board should review the evaluation plan, provide input as it’s refined, and ultimately authorize it. This level of involvement is likely impractical for a large organization, however, where the board’s role may be limited to reviewing findings.

Given the board’s role in guiding and authorizing the programmatic direction of an organization, evaluation results should be of utmost interest to the members collectively and individually. These help board members to understand whether the organization is meeting its goals; the results also help them to set strategic priorities.

An individual board member’s specific role in evaluation will likely be determined by his or her interest level and expertise. For example, if you’re fortunate enough to have a board member with a background in research methods, you’ll likely want to actively engage this individual in evaluation planning. Your board chair will also have an important role to play in terms of championing the evaluation plan and explaining the board’s role to individual members.

It’s also important for the executive director to communicate with the board members and establish clear expectations around their role in evaluation. Many executive directors are surprised to find that their expectations are not necessarily in line with what the board expects, so good communication is essential.

This is a common issue facing nonprofits of all sizes. The reality, however, is that a properly executed evaluation can actually save your organization money – or at least bring in more of it – in the long run. For example, let’s say you run a program that delivers low-cost food to those who need it, but you’ve seen few returning beneficiaries, who are critical to your revenue stream.

You could assume that attrition is the result of reduced need, but that’s unlikely to be true for everyone. An evaluation could reveal that your hours are not meeting the end-users’ needs. In that case, you might just change your hours. Alternatively, you might find that the products you offer aren’t meeting their needs, your services are duplicative of another organization’s, or that your intake process is too time-consuming. All of these can be easily remedied, but you won’t know what to fix if you don’t evaluate.

In addition, reliable evaluation results can serve as an important PR tool and may boost your credibility in the eyes of funders. You might also find that philanthropic funders are willing to include a line-item in your budget to enable you to effectively evaluate the program they’re funding. An evaluation is frequently of great interest to funders as it allows them to understand whether and how their investments are having the greatest impact. These are among the reasons that many funders’ grant proposal formats require that you indicate how you will measure impact.

Internal resistance to evaluation is a common issue for many nonprofit organizations. It could be that staff feel the program is moving along just fine so evaluation is seen as a waste of precious time. Other frequent concerns are that the evaluation findings might result in a discontinued program, lost jobs or an increased workload. Sometimes resistance is related to staff insecurities and lack of experience with evaluation.

Strong internal communication is the key to generating the internal buy-in necessary to carry out an effective evaluation. Be candid about the purpose of the evaluation and your team’s role in carrying it out. Don’t be afraid to address the hard questions (e.g., “What if the findings are negative?”), and above all, stress the evaluation’s role in advancing your mission and program objectives. To increase comfort levels and fully engage your staff, invite them to ask questions and provide suggestions about the evaluation. Consider establishing a system that invites this input anonymously so your staff can raise issues that might be considered sensitive. Also keep in mind that devoting staff meetings to training and/or engaging a consultant to provide technical assistance can help allay fears and move the process forward.

It’s also very likely that your program staff will be involved in data collection. They may, for example, be responsible for completing intake forms or administering pre- and post-service surveys. It’s critical that staff understand the importance of these activities and their relationship to program planning, refinement and continued funding. To that end, it’s important to build internal capacity for evaluation. Key staff need to understand, feel confident and have a good working knowledge of evaluation to support the effort and contribute effectively. If you aren’t completely comfortable in this capacity-building role, consider engaging an expert in the area.

Finally, take steps to ensure that staff are comfortable with their roles, have the opportunity to ask questions and suggest refinements, and are involved in the design process. After all, your program staff are often in the best position to know what information can be gleaned from your beneficiaries.

We already collect a lot of information for our funders that is not helpful to us.

At some time or another, most nonprofit executives have been involved in collecting data required by their funders that they feel don’t benefit the program or its clients. While the nonprofit sector will likely never be completely free of onerous data collection requirements, an evaluation plan can help you to build on what’s required in a way that is beneficial to your mission.

For example, let’s say you run a mentoring program and your funder requires demographic data regarding your beneficiaries. This will most certainly involve some kind of standard data collection method, such as an intake form. Now let’s assume that you’re more interested in knowing whether your beneficiaries are performing better in school. You might modify the intake form to gather baseline information about student performance and then do an annual survey to track changes. (Note: This is a cursory example to demonstrate how a data collection requirement might be modified. In this example, and most others, a lot of other factors would need to be addressed – such as a control group, in this example – to generate reliable data.)
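The baseline-plus-annual-survey idea above can be sketched in a few lines of Python. As with the example in the text, this is a toy illustration only: the participant IDs and GPA figures are invented, and, as noted, a real design would need more (such as a control group) to yield reliable data.

```python
# A toy sketch of tracking change from a baseline measure recorded at
# intake through later annual surveys. All data below is hypothetical.

def change_from_baseline(records):
    """Return each participant's change between baseline and latest survey."""
    changes = {}
    for pid, surveys in records.items():
        ordered = sorted(surveys.items())       # sort measurements by year
        baseline, latest = ordered[0][1], ordered[-1][1]
        changes[pid] = round(latest - baseline, 2)
    return changes

# Hypothetical GPA readings keyed by participant ID, then by survey year.
gpa_records = {
    "p001": {2022: 2.4, 2023: 2.9, 2024: 3.1},
    "p002": {2022: 3.0, 2023: 2.8},
}
```

The design point is that the intake form does double duty: it satisfies the funder’s demographic requirement while also capturing the baseline you need for your own outcome question.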

You might also find that engaging your funder(s) in evaluation planning yields valuable insights (program officers at large foundations often have experience in evaluation), builds the relationship, and presents an opportunity to identify evaluation questions that serve the funder and the program. For example, elsewhere in this section we talk about measurement of outputs and outcomes. Outputs are often required by funders, while outcomes address the meaningful change that has occurred. By engaging your funders in a dialogue on outcomes, you’re opening up the lines of communication to establish a system that benefits everyone.

How do we choose, given our limited resources?

For an organization with only one main program, the answer is easy: evaluate it. However, things are quite a bit more complicated for those with multiple programs and especially multiple locations. In an ideal world, you’d be able to evaluate all of your programs. Economic realities being what they are, however, this may not be possible right away.

Following are some criteria to consider when determining which programs to place at the top of the list:

  • Anything that might be considered a signature program or represents a growth area for your organization.
  • Work that is (or should be) in a growth stage and appears especially promising.
  • Programs that reach relatively large numbers of people.
  • Those with relatively large budgets (as a percentage of your overall budget).
  • Programs that, when evaluated, have the potential to demonstrate the importance of your mission and approach.
  • Those that can potentially be scaled in other communities or otherwise inform the broader field of practice.
  • Programs that funders require be evaluated.
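One simple way to apply the criteria above is to score each program against each criterion and rank the results. The sketch below does this in Python with a crude 0/1 score per criterion; the program names, criteria keys, and scores are all hypothetical, and a real prioritization would of course weigh criteria by judgment, not arithmetic alone.

```python
# Illustrative ranking: score each program 0/1 against the prioritization
# criteria above and sort by total. All names and scores are hypothetical.

CRITERIA = ["signature", "growth_stage", "large_reach", "large_budget",
            "demonstrates_mission", "scalable", "funder_required"]

def rank_programs(programs):
    """Sort programs by how many prioritization criteria they meet."""
    return sorted(programs,
                  key=lambda p: sum(p.get(c, 0) for c in CRITERIA),
                  reverse=True)

programs = [
    {"name": "after-school tutoring", "signature": 1, "large_reach": 1},
    {"name": "annual gala", "large_budget": 1},
    {"name": "job training", "signature": 1, "scalable": 1,
     "funder_required": 1},
]
```

Even a rough scoring exercise like this can make board and staff conversations about where to start more concrete.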

The Colorado Nonprofit Association (www.coloradononprofits.org) suggests that nonprofits consider piloting an evaluation approach in one or two programs before rolling it out organization-wide. This enables the organization to better understand the time and financial implications associated with evaluation, as well as determine whether the evaluation is actually yielding the information it wants.

Using the services of a consultant can be beneficial in several instances, such as when you don’t have the necessary expertise or time, when you’ll benefit from an objective point of view (and one could argue that this is always the case), or when it’s required by a funder.

To the latter point, some government and philanthropic funding will require an independent evaluation. In that case, the donor institution will likely select – or at least approve – the evaluation consultant. If this is the case, consider yourself lucky and push for the most rigorous evaluation possible. Your organization and your field will be better for it.

If you do decide to utilize the services of a consultant, there are many resources available to help you select the right one. For example, the American Evaluation Association (www.eval.org) offers a list of member evaluators by location and areas of expertise. You might also consider asking trusted colleagues and others in your field for recommendations. For organizations located in Southern California, the Center for Nonprofit Management (www.cnmsocal.org) provides trainings and workshops to help guide you through this process.

It’s reasonable to expect potential consultants to develop a proposal that outlines their experience, approach to the project, past work samples, references and budget. Soliciting proposals from several evaluators will help you to understand the range of possibilities and enable you and your board to make a fully informed decision. You’ll also want to interview candidates with your board to ensure that the evaluator you choose is a good match for your organization’s style and culture.

One final note: A good consultant can also serve an important role in helping your staff and board understand the basic principles of evaluation and the role it plays in carrying out your work. To that end, organizations often find it helpful to build in an internal education component (e.g., a special meeting or brown-bag lunch that addresses the topic and answers questions) to the consultant’s scope of work.

Need help choosing a consultant?
Visit managementhelp.org (www.managementhelp.org/staffing/consulting.htm) for step-by-step tips.

If you can’t find what you’re looking for, just ask our experts. We’ll be happy to provide the answer.
