By: Morgan Buras-Finlay, Measurement & Evaluation Manager, Technology & Impact, Salesforce.org and Eric Barela, Director, Measurement & Evaluation, Salesforce.org
Nonprofits do difficult and vitally important work, and it is increasingly crucial for them to demonstrate what they are doing and the impact they are having. As a nonprofit professional, you want to tell others about the great work your organization is doing and make sure that work is happening as planned. To do this, you need data, and you need that data to be good quality, so you can stand behind what you are saying and doing.
However, despite your best-laid plans, you may still find issues with your data. Maybe you and your team don't feel you can trust the data, the data doesn't accurately represent the work, or it doesn't support decision-making the way you want it to.
The good news is there are a few things you can do with your data in your everyday program management activities that will make it easier for you to assess your impact later.
Data Quality Blind Spots and How To Find and Fix Them
Whether you’re collecting data from a paper form, an electronic record, or a survey, you have no doubt encountered myriad issues that call the data into question. You may have encountered some or all of the following pitfalls when gathering impact data.
- Completeness & recurring gaps – You are missing data because forms have not been filled in completely! Not only that, you notice that the same two questions are consistently being skipped or answered incorrectly.
- Accuracy – You have all the data, but some of the responses just don’t jibe with reality. How can a kindergartener in 2019 be born in 1995?
- Representativeness – You sent a survey to all of your donors, but you notice that one segment of your donor base is wildly under-represented, despite making up a large portion of your donors. Where did they go? Are certain types of people more likely to answer your survey or complete your form? If so, why might that be?
- Consistency – You notice dramatic differences in how middle and high school students answered a particular set of questions. You see similar issues between students who answered questions in the dominant language of the region, and those who answered in the minority language. You’re not sure that everyone understood each question in the same way.
- Usefulness – Your team thought it would be interesting to add more questions to the form, and you’ve ended up with a long form without a clear plan for how data from these questions will be used. You find that staff or participants are rushing to complete it or abandoning it altogether because it is just too long.
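Several of the pitfalls above can be caught with simple automated checks before the data is used. As a rough sketch only, here is what a completeness check (which fields are repeatedly skipped) and an accuracy check (the 1995 kindergartener from the example above) might look like; the records, field names, and plausible-age range are all hypothetical:

```python
from collections import Counter

# Hypothetical intake-form records; None marks a skipped question.
records = [
    {"grade": "K", "birth_year": 2013, "zip": "70118"},
    {"grade": "K", "birth_year": 1995, "zip": None},  # implausible age
    {"grade": "1", "birth_year": 2012, "zip": None},  # same field skipped again
]

# Completeness: count how often each question is left blank.
skips = Counter(field for r in records
                for field, value in r.items() if value is None)

# Accuracy: flag kindergarteners whose birth year implies an implausible age,
# assuming the forms were collected in 2019 as in the example above.
FORM_YEAR = 2019
flagged = [r for r in records
           if r["grade"] == "K" and r["birth_year"] is not None
           and not 4 <= FORM_YEAR - r["birth_year"] <= 7]

print(skips)    # 'zip' skipped twice: a recurring gap worth investigating
print(flagged)  # the 1995 kindergartener surfaces for follow-up
```

A recurring gap like the `zip` field here is a signal to look at the form itself, not just the data: the question may be confusing, hard to find, or perceived as intrusive.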
Recommendations for Data-Driven Program Management and Impact Measurement
Data quality is most often a reflection of work process – what staff are doing and how they are doing it. This is a good thing, because it means there are some solutions that you can incorporate into how your organization does business to uncover problems and fix them before they get out of control.
- Journey mapping – Have staff walk you through the steps they take when they collect, enter, and record data. This is a great way to find data quality issues and, more importantly, their root causes. While most staff will regularly monitor their own data quality, everyone has a bad day every now and then. Periodic conversations like this can be an organization’s data quality safety net. While this can be done through observation or one-on-one conversations, having a few staff go through this activity together is the most effective way to uncover major pain points, and it can spur important discussions about how work gets done.
- Know whom you are targeting when conducting surveys – If you are conducting a survey, get to know demographics or characteristics of the whole group before surveying. Census data can be useful here, if you need to know how your group differs from the community at large, but so can the program management data you’ve been collecting. Include a few demographic questions in your survey so that you can see if certain groups are over (or under) represented. For example, if you know that 50% of your program participants come from a particular cultural background, you would expect the proportion of survey respondents from that cultural background to be around 50%.
- Take your forms and surveys for a test drive – Ask staff and key representatives of your target group to review the questions you want to ask, in the format that you want to use (paper, electronic, etc.). This way, you can ensure that the right questions are being asked in the right way, and in consistent language. You’ll know right away if questions are confusing, if there are too many, or if the method is not a good fit.
- Host a data party! Provide snacks and invite community members and staff to review results and data. Invite them to share their perspective on whether or not this data jibes with their experience – and why. This type of activity also helps to build enthusiasm around data and encourage staff to use it for decision-making. Who doesn’t love that?
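The representativeness check described above boils down to comparing each group's share of survey responses against its share of the overall program. A minimal sketch, using made-up counts and an arbitrary 10-percentage-point threshold (your own cutoff will depend on your context):

```python
# Hypothetical counts: all program participants vs. survey respondents,
# broken down by a demographic question included on the survey.
program = {"background_a": 500, "background_b": 300, "background_c": 200}
survey  = {"background_a": 90,  "background_b": 15,  "background_c": 45}

program_total = sum(program.values())
survey_total = sum(survey.values())

# Flag any group whose share of survey responses falls more than
# 10 percentage points below its share of the program population.
THRESHOLD = 0.10
under_represented = []
for group in program:
    expected = program[group] / program_total        # share of participants
    observed = survey.get(group, 0) / survey_total   # share of respondents
    if expected - observed > THRESHOLD:
        under_represented.append(group)

print(under_represented)  # groups whose voices are missing from the results
```

In this made-up example, `background_b` makes up 30% of participants but only 10% of respondents, so it would be flagged for targeted follow-up before you draw conclusions from the survey.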
When you collect data, you’re not doing it for data’s sake. You’re doing it because your organization wants to learn something – what is being done, what is being done well, and what can be done better. And you’re doing it so you can communicate your impact. If your organization is making a difference, making sure your data is what it needs to be will allow you to shout your results from the rooftops with confidence!
For more on impact measurement, get this e-book.
About the Authors
Morgan Buras-Finlay is passionate about helping nonprofits make this new data-driven world work for them. Holding a Master’s in Social Work, Morgan is a career nonprofit professional. Starting out as a clinician and case manager, she transitioned to measurement and evaluation, where she has been able to combine her commitment to social justice with her strong belief in the power of data and research. She is dedicated to helping nonprofits bridge the technology divide and elevate their use of data for service delivery and decision-making so they can provide the best possible services and resources to their communities.
Eric Barela is committed to making sure organizations have useful data to determine their impact. He has over 15 years of experience guiding organizations, from school districts to nonprofits to foundations, on effective impact data collection and reporting. He works to generate credible knowledge through the application of culturally competent evaluation techniques. Eric is currently a Board Member of the American Evaluation Association.