Behind the Impact: Four Pillars for Monitoring and Evaluating the Social Bottom Line

By Salesforce.org | April 10, 2018 | Nonprofit, Program Management


By Zak Kaufman, Co-Founder and CEO at Vera Solutions.

Let’s talk about the bottom line. In the private sector, successful companies use real-time data on revenues, costs, and profit to drive efficiencies and continually improve their products. In the social sector, meanwhile, we scramble, grasp, approximate, and even fudge to get a mere glimpse of our bottom line, to showcase our ‘impact’. The difference lies in how bottom-line data is managed: companies treat it as essential lifeblood, while nonprofits often see it as a cross to bear. But what do we mean by ‘impact measurement’?

While public health experts, development economists, and social impact investors don’t agree on everything, most do agree that impact measures need to consider:

  • A dimension of scale/reach (i.e. how many individuals, households, widgets)
  • A dimension of depth/effect (i.e. how much change observed per individual, household, widget, usually considering a baseline and a counterfactual)
  • A dimension of time (i.e. over what period is/was the change observed)

To measure these dimensions, we collect, capture, manage, and analyse monitoring and evaluation data.
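As a rough illustration of how the three dimensions combine, consider this minimal sketch (all numbers and function names are invented for illustration, not drawn from any real program): depth of effect is the observed change from baseline minus the counterfactual change, and total impact scales that effect by reach over the measurement period.

```python
def net_effect(baseline, endline, counterfactual_change):
    """Depth of effect per individual: observed change from baseline,
    minus the change we'd expect without the program (counterfactual)."""
    return (endline - baseline) - counterfactual_change

def total_impact(per_person_effect, reach):
    """Scale a per-person effect by the number of people reached."""
    return per_person_effect * reach

# Hypothetical example: a knowledge score measured over one year.
effect = net_effect(baseline=52.0, endline=70.0, counterfactual_change=5.0)  # 13.0
print(total_impact(effect, reach=100_000))  # → 1300000.0
```

The point of the sketch is simply that all three dimensions must be defined before the arithmetic means anything: reach without depth, or depth without a counterfactual, gives an inflated picture.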

Differentiating between monitoring and evaluation

Ten years ago, I moved to South Africa to oversee research, monitoring, and evaluation for Grassroot Soccer’s sport-based HIV prevention program. We ran three randomised controlled trials to evaluate the program’s effects in Zimbabwe and South Africa, conducted dozens of focus groups with program staff, students, and teachers, and overhauled a labyrinth of Excel spreadsheets to put in place a robust, Salesforce-based program monitoring system tracking the program’s outputs and outcomes for over 100,000 youth per year. Peer organisations often told us enviously, “You guys have great M&E!” In fact, though, we got to a clear social bottom line by focusing separately on M and on E.

Monitoring programs is effectively like taking care of a child day-to-day: it requires consistency, discipline, routine; it’s much easier once you have established processes and systems, and the lessons learned typically result in micro-adjustments. Evaluation, meanwhile, is like taking your child to the doctor for periodic check-ups: it requires extra time, money, and willingness to accept bad news; it’s especially critical early in life, and the lessons learned sometimes result in major adjustments. Because monitoring and evaluation revolve around different activities and data, the only way to have “great M&E” is to separately have “great M” and “great E”. The social sector has, probably by accident, done itself and the world a disservice by conflating the disparate functions of monitoring and evaluation into the nebulous non-thing of “M&E”. As more and more programs, organisations, and companies seek to optimise for impact, let’s bust up the buzzwords and break down the bottom line.

Four pillars for great monitoring and evaluation

Grassroot Soccer’s success, like that of the numerous other projects we’ve worked on, has hinged on these four pillars:

1. Sound Strategy: The first step to a strong bottom line is defining a sound Theory of Change and Logical Framework, along with SMART indicators that will provide an objective sense of success or failure.

TIP: Good frameworks clearly define outputs (tracking scale/reach) and outcomes (tracking depth/effect), stemming from a program’s activities and connected to its overarching goal(s).
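To make the output/outcome distinction concrete, here is a hypothetical sketch (all indicator names and targets invented, not from any real logical framework) of indicators encoded as data, so the two kinds can be tracked and reported separately:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str      # "output" (scale/reach) or "outcome" (depth/effect)
    target: float

# Hypothetical logical framework for a youth health program.
logframe = [
    Indicator("Youth completing the curriculum", kind="output", target=100_000),
    Indicator("Change in knowledge score vs. baseline", kind="outcome", target=10.0),
]

outputs = [i for i in logframe if i.kind == "output"]
outcomes = [i for i in logframe if i.kind == "outcome"]
```

Keeping the two kinds explicit in the data model is one way to stop reach numbers from quietly standing in for evidence of effect.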

2. Appropriate Tools/Methods: Whether for daily attendance tracking or three-year outcome evaluations, we need suitable, consistent instruments for collecting data on our defined outputs and outcomes. When measuring effects, we need to use appropriate methods that minimise selection and information bias and suit the level of evidence we seek to generate.

TIP: Make tools as simple as possible: complexity undermines adoption. TechnoServe, for example, uses CommCareHQ and TaroWorks, integrated with a Salesforce back-end, to track day-to-day program data across East Africa and Latin America.

3. Strong Systems: Once tools are developed and standardized, we need processes and a platform to manage day-to-day data flow and analyse results. Our systems need to evolve as our programs evolve, so flexibility is paramount.

TIP: Design for and with the end user, making systems so intuitive that non-technical staff can get a clear sense of the bottom line and easily drill down from global to national to local to individual levels.

Image: A dashboard from the Aga Khan Foundation’s Global Reach System, built on Amp Impact, a Salesforce-based program and indicator management app.

4. Sufficient Capacity: We need people with the right skills and sufficient time to collect, capture, manage, analyse, and utilise data effectively.

TIP: Don’t underestimate the number of capable-person-hours required for each function: if you don’t have enough, you either need to train or hire.

Great monitoring requires different tools/methods, systems, and capacities than great evaluation. By focusing separately on each and ensuring strength across all four pillars, any organisation can make bottom-line measurement its lifeblood and use it to drive stronger programs and greater impact.

Interested in learning more about defining and measuring your impact? Watch the webinar.

Guest Author: Zak Kaufman is Co-Founder and CEO of Vera Solutions, a global social enterprise helping social sector organisations use cloud and mobile technology to better understand their impact and streamline their operations. Since 2010, Vera Solutions has worked with over 240 organisations in 45+ countries.