The Afterschool Alliance is pleased to present the second installment of our “Evaluating afterschool” blog series, which turns to program providers in the field to answer some of the common questions asked about program evaluation. Be sure to take a look at the first post of the series, which explores evaluation lessons from Dallas Afterschool.
This post is written by Jason Spector, senior research & evaluation manager for After-School All-Stars, a national afterschool program serving more than 70,000 low-income, at-risk students across 11 states and the District of Columbia.
I recently left a meeting thinking I’m no longer doing the job I was hired to do. But for a professional evaluator of afterschool programs, change is a good thing.
When I joined After-School All-Stars (ASAS) to launch our national evaluation department two and a half years ago, my primary goal was to measure and support ASAS’ outcomes as the organization entered into an expansion phase. While I currently maintain this responsibility, our national evaluation team is now focused on examining program quality as opposed to outcomes measurement. Why the change? Simply put, we realized our top priority was to boost our quality, because when we do, the impact and outcomes will follow.
This type of shift is not an easy decision for a nonprofit to make. As nonprofits move toward more advanced outcomes measurement to satisfy increasingly savvy funders, leaders everywhere are faced with some critical questions:
- Should I deepen my organization’s investment in evaluation?
- What can I expect to receive in return?
These questions carry an assumption that an investment in evaluation is inherently not an investment in your organization’s mission and programs. Furthermore, many program leaders assume that evaluations must yield large positive outcomes in order to attract new funders and compensate for the “cost” of not putting dollars directly into program operations. But this logic fails to consider the many benefits evaluations afford organizations.
Here are a few benefits of program evaluations:
- Define your programs: The act of identifying specific goals and mapping your program practices and strategies onto these goals serves to clarify your program model and identify potential gaps.
- Establish research- and data-driven practices: Evaluation can guide you in modifying your programming to make it research-driven. It can also help you implement and improve data feedback systems that inform program practices.
- Identify areas for improvement: Negative findings are opportunities for growth and refinement. Reflecting on program practices and making adjustments around curriculum, pedagogy and quality can ensure your organization is living up to its mission.
More evaluation considerations
In determining whether evaluation is worth the investment, and what type of benefit you will derive from it, nonprofit leaders must first consider their organizational stage of development on several levels. The Corporation for National and Community Service and the Social Innovation Fund have done excellent work in fully laying out many questions to consider. Here’s a summary of just a few:
- Organizational considerations
- Do your leadership and Board support evaluation capacity-building?
- Does your organizational culture support data-informed decision-making, including decisions about programming?
- Programmatic considerations
- Do you have a fully developed theory of change, with defined activities linked to expected outcomes that program staff and leadership agree upon?
- Is your programming stable and operating at sufficient scale to rigorously evaluate?
- Evaluative considerations
- Have you previously conducted process or outcomes evaluations?
- Do you have defined processes to regularly collect data?
- Do you have the capacity and willingness to work with an external evaluator?
As you examine your organization’s evaluation readiness, you may conclude that an impact evaluation does not make sense at this time. This doesn’t mean that evaluation can’t help push your organization forward. Consider the potential of evaluation to hone your model, advance data-driven programming, and yes, even prompt adjustments based on negative findings!