Week 2: Evaluation methods for large, complex, global initiatives

By Simon Hearn

My 2014 evaluation events calendar launched in earnest this week with a workshop hosted by the US Institute of Medicine, focusing on evaluation methods and considerations for large-scale, complex, multi-national global health initiatives such as the Global Fund or PEPFAR.

I was invited by the organisers to present the BetterEvaluation Framework to frame the discussions of the two-day event, and together with Patricia, Greet and other BetterEvaluation colleagues, we developed an engaging presentation.

The meeting was convened following the completion of a number of evaluations of this type of initiative, in order to share the learning from them, bring in other perspectives, and think more critically about the gains and trade-offs of different evaluation options. The four evaluations that formed the focus of the workshop were:

  1. Affordable Medicines Facility - malaria (AMFm)
  2. President’s Emergency Plan for AIDS Relief (PEPFAR)
  3. President’s Malaria Initiative (PMI)
  4. Global Fund to Fight AIDS, Tuberculosis, and Malaria (GFATM)

This was a fantastic opportunity for BetterEvaluation to work with some very talented and influential evaluators and commissioners of evaluation on a very pertinent issue: how do you manage and design very big evaluations of quite complex initiatives? This is a popular topic at the moment, with USAID publishing a new discussion note on complexity-aware monitoring.

Our immediate reaction to this challenge, and the opening of my talk, was to pose two important questions: What is a complex initiative? What are the challenges with evaluating such initiatives?

Rather than assume that an initiative is either complex or not, we discussed six dimensions that might vary in their complexity (Funnell & Rogers, 2011). The first three relate to the nature of the intervention and the latter three to how it works:

Focus – Are the objectives pre-defined or emergent?
Involvement – Is there clear governance, or is it flexible and shifting?
Consistency – Is delivery standardised, or does it adapt to context?
Necessariness – Is the intervention the only way to achieve the intended impacts? And is it possible to know all the other pathways?
Sufficiency – Can the intervention produce the intended impacts by itself? And if not, can we predict the external factors required?
Change trajectory – What is the relationship between inputs and outcomes, and how does it change over time?

We then suggested three challenges for evaluating this type of initiative:

  1. Describing what is being implemented when it varies across multiple sites and programme components, and changes over time.
  2. Getting data about what impacts have been achieved, given time lags before they become evident.
  3. Attributing the impacts to a particular program, or the activities of particular partners, given the multiple factors affecting impacts.

This brings us to the Rainbow Framework, which explicitly helps address these particular challenges. We emphasised tasks around developing programme theories, combining quantitative and qualitative data, understanding causes, and synthesising results - all of which need particular attention in complex evaluations.

There is, of course, still the problem of deciding between the many options presented in the Framework, but luckily the BetterEvaluation team has been busy developing a new Start Here page with advice on how to make decisions about methods.

We have also launched a new thematic page on complexity, where you will find lots of new resources on how to think about complexity and what it might mean for evaluations.

You can download the slides from this presentation below.
