By Todd Cohen
Mercy Housing and Shelter Corp. in Hartford, Conn., assumed the main users of its soup kitchen were homeless individuals who frequented the shelter overnight.
But the agency learned from an evaluation it conducted that while half the clients of the soup kitchen were in fact homeless, the other half were families and children living in apartments in the neighborhood.
The evaluation also found that while food was the biggest need for all clients of the soup kitchen, a second big need was for medical services.
So the agency, which had a clinic in its facility, now is moving its meals program to new space on the second floor and creating a medical suite on the lower level.
Mercy is one of roughly 30 agencies in north-central Connecticut that have participated in “Building Evaluation Capacity,” an 18-month program the Hartford Foundation for Public Giving offers to its grant recipients.
Modeled on a program developed by the Bruner Foundation in Rochester, N.Y., that is designed to help nonprofits “think evaluatively and use evaluation in a strategic way,” the effort by the Hartford Foundation aims to help nonprofits better understand and use evaluation to track and improve their impact, says Annemarie Riemer, director of the foundation’s Nonprofit Support Program.
The foundation is one of a growing number of funders that are using metrics to evaluate the impact of their grantmaking, to help the agencies they fund better track their own impact, and in the process help themselves and their grantees become “learning” organizations that can improve their programs and operations to better fulfill their missions.
“We’re trying as a field to move beyond just counting the number of kids in after-school programs or the number of poor people who receive health-care services,” says Heather Peeler, vice president for programs at Grantmakers for Effective Organizations, or GEO, a coalition of 370 grantmaking organizations.
“We’re trying to understand how these programs make a difference in people’s lives,” she says, “and that’s complex.”
Among 700 foundations responding to a survey GEO conducted in 2011, roughly 70 percent said they evaluated their work, a percentage that was unchanged from a similar survey GEO conducted in 2008.
But among foundations that evaluate their work, about 80 percent of them seem to be using the evaluations for “accountability” purposes, with only about 60 percent using the data they collect to strengthen their future grantmaking, and only 30 percent using it to “strengthen knowledge in the field,” Peeler says.
GEO believes foundations should use evaluation metrics for all those reasons, she says.
“Grantmakers should be accountable to their communities,” she says, “but more focus should be on using evaluation as a learning tool.”
The growing attention to metrics and evaluation is the result of a number of market factors, experts say.
Those factors include the desire by funders in a tough economy to better understand the return on their investment, as well as a growing number of products and services offered by vendors and consultants to help foundations and nonprofits better measure their impact.
“Funders are having fewer resources and greater demands,” says Althea Gonzalez, program manager for North Carolina for Hispanics in Philanthropy, a national organization that raises money from other funders and makes grants to groups that are led by and serve Latinos.
“We want to make sure we’re making good investments, and strategic investments,” she says.
Peeler of GEO says the number of companies and consultants offering evaluation products and services has grown in response to market demand.
“We need as many tools and resources as possible to help understand and navigate measurement,” she says. “Unlike the business world, social change isn’t about counting widgets.”
The growing popularity of evaluation also may be partly the result of foundations wanting to insulate themselves against criticism, particularly if the evaluation is “activity-based rather than outcome-based,” Gonzalez says.
But with nonprofits and funders becoming more “bottom-line-oriented,” she says, “there’s going to be an increasing need to be able to demonstrate that the investment was effective, and to tell the story of that investment.”
With the damaged economy continuing to strain the operations and finances of nonprofits, a growing number of funders have been focusing grant dollars on strengthening nonprofits’ organizational “capacity,” a focus that often includes trying to help nonprofits improve the way they measure their impact.
And measuring an organization’s impact is an indicator that can be tough to gauge, funders say.
“Capacity-building is the hardest thing to have good metrics around,” Gonzalez says. “It’s much more tenuous to say how is the board stronger this year in their leadership than, say, in a diabetes program, how many people did you serve.”
Hispanics in Philanthropy, which has raised and made grants totaling over $4 million in North Carolina since it began operating in the state in 2002, focuses its grants only on capacity-building, a focus that increasingly has included “qualitative human dynamics” in addition to “tangible” measurements that can be easier to track, Gonzalez says.
A nonprofit that never has had a financial audit or had the capacity to conduct one, and that still is handling its financial records through Excel spreadsheets or handwritten notes, for example, might get a capacity-building grant from HIP to improve its financial management.
Funds from the grant might be used to provide financial training to the organization’s executive director and finance officer, buy QuickBooks accounting software, pay a consultant to set up financial-management “checkpoints” to ensure proper accounting, and hire an auditor.
“That’s tangible measurement and can catapult an organization to efficiency and measurement,” Gonzalez says. “That’s an easy metric, an easy way to evaluate what happened.”
A tougher challenge would be to track the way a nonprofit, for example, addresses problems in its leadership and governance, she says.
A board and executive director might not be working together effectively, with the executive director needing to develop skills in board management, and the board needing to develop skills in managing the executive director.
So a grant might be used to hire a consultant to coach the executive director about how to work with the board, and to train the board to work with the executive director and conduct an annual performance review.
While that kind of investment is “really building major capacity,” Gonzalez says, “tangibly demonstrating” at the end of the grant that the “board is in a better place to supervise the executive director, and that the executive director can better manage the board, is difficult.”
Funders increasingly are looking for indicators they can use to improve their funding strategies.
“We don’t want to just collect data that sits on shelves,” says Marisa Allen, director of research and evaluation at the Colorado Health Foundation in Denver. “We are very intentional about using this data for decision-making and to refine our grantmaking.”
In funding programs that were trying to enroll adults and children in the state in Medicaid, for example, the foundation found some of those programs were not able to enroll a high number of Coloradans who were eligible for that coverage.
Trying to learn why, the foundation found some programs were not effective because they were “casting a wide net,” trying to enroll people at big events like health fairs, and that a more targeted enrollment strategy might be more effective.
“If a program identified a group of Coloradans who were highly likely to be eligible for public health-insurance programs, then they were able to achieve measurable results and enroll more,” Allen says.
Programs that used data on free and reduced-price lunches served to students at schools, the foundation found, were able to identify families eligible for public health insurance, and so were much more successful in enrolling them.
So the foundation began partnering with school systems to provide nonprofits with the names of low-income families whose children might be eligible for free and reduced-price lunch.
In making grants to help nonprofits build their organizational capacity, a growing number of funders also are aiming to help those nonprofits, as well as themselves, become “learning” organizations.
“We think an organization that is able to evaluate the effectiveness of its programs, and organizations that are learning organizations, are more likely to achieve mission,” says Kevin Cain, president and CEO of the John Rex Endowment in Raleigh, N.C. “That helps us achieve our mission. And they’re more likely to be running efficiently and effectively.”
In 2010, the Endowment invited agencies it funds to participate in an intensive, peer-learning initiative to help them do a better job evaluating their impact.
Led by the Center for Creative Leadership in Greensboro, N.C., the evaluation training worked with representatives of nine participating agencies, with the agency representatives organized into three teams.
With representatives of each agency selecting a specific project to work on, the teams received evaluation coaching, returned to their agencies to put their evaluation projects into effect, then reconvened to share what they had done and get feedback from their peers and coaches.
A key strategy in the effort was to help the representatives of each agency involve other members of the agency’s staff in the evaluation process and encourage them to share information.
“There was a lot of work engaging more people than program people who have direct contact with clients, involving more people in the conversation,” Cain says. “You become more learning organizations. You start to understand that human resources has a role, the accounting department has a role.”
As a result of the process, the Endowment hopes to become “more systematic about offering evaluation support to selected organizations we provide funding for,” Cain says. “As part of the grant application process, we’d identify an organization and make sure we build in support for evaluation for the grant with this organization.”
Capacity for change
Western North Carolina Nonprofit Pathways is a collaboration in the region that supports training and organizational development for local nonprofits.
To evaluate the impact it was having on its grantees, and identify what was working and what might be improved, WNC Nonprofit Pathways a few years ago hired TCC Group, a national consulting firm.
That evaluation found that the key to an organization’s success is its “adaptive leadership ability,” says Gonzalez of HIP, who serves on an advisory group for the collaborative.
“Their ability to respond and even proactively plan for change is what ensures they’re going to survive the changes that are inherent in nonprofits and in our economic situation,” she says. “There needs to be an ability to respond quickly and well to what’s happening, rather than trudging forward to what the status quo is.”
The way nonprofits “train for those qualities, and assess the effectiveness of the training afterwards, is the key,” she says.
So funders should be working to get nonprofits to start thinking about evaluation, and helping them get the evaluation training they need.
“A nonprofit often thinks of activity as the end point of evaluation, rather than the impact,” Gonzalez says. “Getting them to understand what evaluation really is, and to put those measures into place, is a huge deal. It’s capacity-building in and of itself for nonprofits.”
Investing in evaluation
Funders that support efforts to build the evaluation capacity of nonprofits agree that evaluation requires an investment in training and the staffing and tools nonprofits need to be effective in measuring their impact.
Evaluation does not lend itself to a “one-size-fits-all” approach, and there are “a lot of ways in which we should answer the questions, ‘Are we making a difference and can we do it better,’” says Peeler of Grantmakers for Effective Organizations.
Funders and nonprofits also need to be “realistic about your expectations for measurement and what’s reasonable,” she says.
The best approach grantmakers can take, she says, is to recognize the true costs of running programs, including the cost of evaluating them.
“They can provide flexible funding to their grantees in terms of general operating support so nonprofits can make the best decisions about how to direct resources,” Peeler says, “and they can provide multi-year support, recognizing that change is hard and takes a long time, often years.”
The return on that investment in evaluation can be nonprofits that do a better job delivering programs, communicating their impact, and raising money to support those programs.
“As the economy continues to languish and doesn’t really improve, the competition to get out there and get the dollars that are available becomes even more difficult,” says Riemer of the Hartford Foundation for Public Giving.
“Providing agencies with a tool like Building Evaluation Capacity, which allows them to tell a story and make adjustments to how they deliver services,” she says, “that’s giving agencies an important resource for effectiveness and sustainability.”