Once you’ve decided impact management is important to your organization, where do you start? Begin with a simple two-step self-assessment to understand where your organization falls along the impact maturity scale: What tools and processes do you already have in place? What works, and what doesn’t?
1. What Evidence Should I Be Collecting?
Building from your theory of change, develop a comprehensive, clear set of metrics (also called “indicators” in the social sector) that tracks inputs, activities, outputs, and outcomes; together, these build evidence of your program’s effectiveness. These metrics should measure what really matters rather than simply what is easy or possible to track.
For example, in a mentoring program, one short-term outcome is “student and mentor develop a positive relationship.” How might you know whether this is the case? You might:
- Ask a simple yes/no survey question (“Do you and your mentor/mentee have a positive relationship?”)
- Ask a more nuanced question (“On a five-point scale from very negative to very positive, how positive would you say your relationship with your mentor/mentee is?”)
- Have a trained observer sit in on mentoring sessions and use a rubric to rate the relationship on positivity and other dimensions.
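To make the five-point option concrete, here is a minimal sketch of how responses to that question might be summarized. The response data and the “4 or above counts as positive” cutoff are invented for illustration; your own coding scheme and threshold would come from your theory of change.

```python
# Summarize responses to the hypothetical five-point relationship question,
# coded 1 ("very negative") through 5 ("very positive").
from statistics import mean

responses = [5, 4, 4, 3, 5, 2, 4]  # illustrative sample data

avg = mean(responses)
# Treat ratings of 4 or 5 as a "positive relationship" (assumed cutoff).
share_positive = sum(1 for r in responses if r >= 4) / len(responses)

print(f"average rating: {avg:.2f}")
print(f"share reporting a positive relationship: {share_positive:.0%}")
```

Even a simple summary like this lets you report both a typical rating and the share of participants above a defined threshold, which is often easier for stakeholders to interpret than a raw average.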
2. Think Seriously About What’s Practical.
Even a metric that aligns perfectly with your theory of change won’t be useful if you can’t collect good data — or if collecting good data would take more resources than you can devote to the task.
Consider factors such as:
- The organization’s technical capacity.
  - If you’re considering a survey-based measure, do you have access to an online survey tool, or a way to collect paper surveys and enter the data into a form you can analyze?
  - If you want to collect data about how often mentors and mentees meet, do you have a way of capturing that information, such as client management software?
- The organization’s methodological and analytical capacity. Some metrics require specialized training or techniques to collect, analyze, or interpret.
  - For example, selecting a metric for “positive student/mentor relationship” that relies on observations would require creating a rubric (or finding an existing tool), establishing which ratings constitute a “positive” relationship, and conducting training and quality control to ensure that observers’ ratings are consistent.
  - For a program implemented in multiple school districts, states, or countries, selecting a metric for student academic performance might require skills in organizing and transforming data collected with different assessment tools so that it can be meaningfully compared.
- Data access.
  - Will your organization realistically be able to acquire the data it needs? This is especially important for metrics that rely on data collected or managed externally by government agencies or other organizations.
  - Are the appropriate data-sharing agreements in place, if necessary?
  - What would the consequences be if the external organization changed its approach?
- Participant perspective. The relevance and meaning of a metric to participants is an important theoretical consideration, but it is also a practical one: if the people your organization serves don’t see the relevance of a measure or tool, response rates may drop and data quality may suffer.
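The cross-district comparison mentioned above can be sketched with one common technique: standardizing scores within each district so that results from different assessment tools sit on a comparable scale. The district names, tools, scales, and scores below are invented for illustration, and z-scores compare only relative standing within each group, not absolute achievement.

```python
# Illustrative sketch: standardize scores within each district (z-scores)
# so that results from two different assessment tools can be compared.
from statistics import mean, stdev

scores_by_district = {
    "District A (Tool X, 0-100 scale)": [62, 70, 75, 81, 90],
    "District B (Tool Y, 200-800 scale)": [420, 500, 540, 610, 660],
}

def standardize(values):
    """Convert raw scores to z-scores: (value - mean) / standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

for district, scores in scores_by_district.items():
    z = standardize(scores)
    print(district, [round(x, 2) for x in z])
```

A student half a standard deviation above their district’s mean gets the same z-score regardless of which tool produced the raw number; deciding whether that comparison is meaningful for your program is exactly the kind of methodological question this step asks you to weigh.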
When evidence building is practiced continuously and key learnings are shared publicly, other organizations can more easily learn from your work and adapt their own programs, practices, and activities.
Read more about how to implement impact management practices in your organization.
About the Author
Sr. Manager, Impact Innovation Strategy