Why wait until something is broken to check it out, when repairing a crack or weakness could prevent the break?
"Something here is working. Intuitively, we have a good idea about what is and is not working," the executive director of a youth-services agency says. "We just don't know the proper dosage. Something we think is of no importance may be really important."
"It is increasingly important in a social service agency to have a handle on efficacy," the deputy director of the same agency says. "This work is hard. You have to know what success looks like, you have to go beyond the warm and fuzzy. You have to know what you are achieving."
"We've always been interested and committed to evaluation," a board member says. "We just didn't necessarily have the bandwidth, time, etc. to address it. We also have had a sense that our results are excellent and it would be to our benefit to confirm that."
In other words, find out where the cracks are.
In the process of knowing, the deputy director says, the work will improve.
Her mandate has both internal motives -- find out what works -- and external ones -- show the world: parents, partners and, especially, funders.
Although satisfying funders was not the main goal in setting up the evaluation program, the views of funders who want hard data were important, and, in the end, the program probably paid for itself in new funding, the board member says.
"You have to know what you are achieving," the deputy director says. "You can evaluate anything."
The quantitative measures for her agency looked good: high participation, long-term participation, and better results at school.
The questions were: Why? What elements of the program were producing these results? And were there other results that could or should be achieved?
Enter evaluation... and its sometimes strange results.
First came the consultant, then came the participation of all those involved in the programs: teachers, administrators, partners, clients. In fact, that's the name of the process: participatory evaluation.
It's a little like dissection, although the proper term is "logic model." With the guidance of the consultant, the staff breaks apart desired outcomes, activities and resources. What are you putting in? What are you getting out? What does that desired outcome really look like? How can you measure it?
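To make this concrete, imagine a hypothetical after-school tutoring program: the resources might be volunteer tutors and classroom space, the activities weekly tutoring sessions, and the desired outcome improved homework completion, measured by teacher reports each semester.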
In the course of such dissection, the group contributes to strategic planning as well as to the development of evaluation tools. "You have to know what you want to accomplish before you can evaluate it," the deputy director says.
The board member agrees. "The board pushed for evaluation to be done, on the board as well. [Evaluation] informs certain decisions on programming, on what to expand, where to apply additional resources."
And strange things happen on the way to understanding. The deputy director cites the example of a survey of clients regarding their ability to work with others. The "before" test showed that clients had a high level of confidence in their ability to be team players. The "after" test showed a loss of confidence. That was not the hoped-for outcome. Surely, the program improved interpersonal skills!
A focus group was convened, and it revealed that the loss of confidence stemmed from a greater understanding of what cooperation and teamwork mean. Clients had over-rated their ability at the outset, the deputy director says. The program had increased their understanding of truly supportive teamwork; a more modest self-appraisal resulted.
"When you get results you don't understand, you do focus groups," she says.
In effect, the evaluation process is itself evaluated until methods of measuring become tools for achieving excellence, tools that clearly show "successful" and "needs-tweaking."
For youth programs, good evaluation is especially important, the director says. "There is pressure to show academic gains, but when you look at a logic model, you see that we are addressing more than the elements of academics. We are looking at the tools for life."
"The tools of life" may seem intangible but they are made up of discrete, measurable skills, skills that can be evaluated.
Nothing is trouble-free
"It's challenging to develop tools and get staff to buy in and use the assessment tools," the executive director says. "The biggest problem is one of capacity." That means time and people, as well as the knowledge, to really understand what should be happening in each program. Hence the importance of participatory evaluation, through which all involved contribute ideas and shape outcomes.
Equally important is funding for ancillary personnel to relieve staff of administrative duties so staff has time to both develop and use the new tools. If dissecting and evaluating programs is added to the workload of already overburdened staff, the evaluation program will fail.
Additional staff, like the guidance of a consultant, requires funding.
At the top of the list of donors were board members. Every member of the board donates to the agency, showing both constituents and funders that the board believes in what others are being asked to support.
Then the agency went about raising unrestricted funds through benefits and sponsorships.
"If you do the job well, you will wean yourself away from the consultant," the deputy director says. Ideally, the consultant trains the staff in how to do the process. "You set it up so you can re-use the process and re-evaluate," she says. As client needs, the composition of the community, and the size of the agency change, new evaluation tools can be developed in-house.
In part, momentum will be maintained by a new staff member, the deputy director says, one whose job will be to manage evaluation and keep it going as an integral part of the organization.
Once evaluation is integrated into the structure of the organization, staff will realize that spending time to find out what is working, and fixing what is not, contributes to their own satisfaction as much as it does to the excellence of the agency.
The board member is a confirmed supporter of thorough evaluation and believes that funders should be as well. "Funders," he says, "can continually press to see certain quantitative support for assertions made." His agency will have that data.
Although the new evaluation system has been in use for little more than a year, programs have already been changed, partnerships have expanded and new opportunities have been grasped. When the agency can demonstrate tangible results, the executive director says, it can tap new funding sources and eventually expand to serve those on the waiting list.
Again, the board member concurs. "We use the results in fundraising and recruitment," he says. "The results will also inform the board's strategic plan by identifying where strengths and weaknesses are."
The consensus seems to be that thorough, ongoing evaluation will keep the program strong and so flexible that it won't break in the future when changes in constituency, demographics of the neighborhood, funding or staff take place, as they inevitably will.
"Fix it before it's broken" becomes the mantra.