Week 8: Guest blog: Innovation in development evaluation


Development aid is changing rapidly – so must development evaluation.

This is the second post in our series on innovation in development evaluation. Thomas Winderl, an evaluation consultant and co-author of ‘Innovations in monitoring and evaluating results’, explains why evaluation needs to keep pace with an increasing understanding of complexity in development planning, why multi-level mixed methods will become the new norm, and why evaluators need to get more imaginative about primary data collection. Read part one in this series here.

1. Planning in a complex, dynamic environment requires more and different evaluations

Linear, mechanistic planning for development is increasingly seen as problematic. Traditional feedback loops that diligently check if an intervention is ‘on-track’ in achieving a pre-defined milestone do not work with flexible planning. In their typical form (with quarterly and annual monitoring, mid-term reviews, final evaluations, annual reporting, etc.), they are also too slow to influence decision-making in time.

A new generation of evaluations is needed: one which better reflects the unpredictability and complexity of interactions typically found in systems; one which gives renewed emphasis to innovation, with prototypes and pilots that can be scaled up; and one which can cope with the highly dynamic environments faced by development interventions.

This is an exciting opportunity for monitoring and evaluation to re-invent itself. With linear, rigid planning increasingly being replaced by more flexible planning approaches that can address complex systems, we now find that we need more responsive, more frequent and ‘lighter’ evaluations that can capture and adapt to rapidly and continuously changing circumstances and cultural dynamics.

To do this we need two things. First, we need up-to-the-minute ‘real-time’ or continuous updates at the outcome level; this can be achieved by using, for example, mobile data collection, intelligent infrastructure, or participatory statistics to ‘fill in’ the time gaps between official statistical data collections. Second, we need broader methods that can record results outside a rigid logical framework; one way to do this is to retrospectively ‘harvest outcomes’, an approach that collects evidence of what has been achieved and works backward to determine whether and how the intervention contributed to the change.

2. Multi-level mixed methods become the norm

Although quantitative and qualitative methods are still regarded by some as two competing and incompatible options (in his recent blog, Michael Quinn Patton compared them to two-year-olds not yet able to play together), there is a rapidly emerging consensus that an evaluation based on a single method is simply not good enough. For most development interventions, no single method can adequately describe and analyze the interactions found in complex systems.

Mixed methods allow for triangulation – or comparative analysis – which enables us to capture and cross-check complex realities, providing a fuller understanding, from a range of perspectives, of the success (or lack of it) of policies, services or programmes.

It is likely that mixed methods will soon become the standard for most evaluations. But using mixed methods alone is not enough; they should be applied on multiple levels.

Graph: Example of a multi-level mixed method to evaluate a language school

Source: adapted from Michael Bamberger, Introduction to Mixed Methods in Impact Evaluation, InterAction/The Rockefeller Foundation, Impact Evaluation Notes No. 3, August 2012

3. Outcomes count

There is broad agreement that what ultimately counts - and should therefore be closely monitored and evaluated - are outcomes and impact. That is to say, what matters is not so much how something is done (the inputs, activities and outputs), but what happens as a result. And since impact is hard to assess if we know little about outcome results, monitoring and evaluating outcomes becomes key.

There is one problem, however: by their nature, outcomes can be difficult to monitor and evaluate. Typically, data on behaviour or performance change is not readily available. This means that we have to collect primary data.

The task of collecting more and better outcome-level primary data requires us to be more creative, or even to modify and expand our set of data collection tools.

It will no longer be sufficient to rely on non-random ‘semi-structured interviews with key stakeholders’, unspecified ‘focus groups’, and so on. Major primary data collection will need to be carried out prior to or as part of an evaluation process. This will also require more credible and more outcome-focused monitoring systems.

Thankfully, many tools are becoming available to us as technology develops and becomes more widespread. Small, nimble random sample surveys such as LQAS (Lot Quality Assurance Sampling) are already in more frequent use (see the sketch below). Crowdsourced information gathering and the use of micro-narratives can enable us to collect data that might otherwise be unobtainable through a conventional evaluation or monitoring activity. Another option is the use of ‘data exhaust’: data which is passively collected from people’s use of digital services like mobile phones and web content such as news media and social media interactions.
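To make the LQAS idea concrete, here is a minimal sketch of its classic two-threshold decision rule; the sample size, decision rule and coverage thresholds below are illustrative choices, not figures from the original post.

```python
# A minimal sketch of the Lot Quality Assurance Sampling (LQAS) decision
# rule. All numbers are illustrative assumptions: an area 'passes' if at
# least d of n randomly sampled units show the desired outcome.
from scipy.stats import binom

def lqas_risks(n: int, d: int, p_upper: float, p_lower: float):
    """Return the two misclassification risks of an LQAS decision rule.

    alpha - risk of failing an area whose true coverage is p_upper (good area)
    beta  - risk of passing an area whose true coverage is p_lower (poor area)
    """
    alpha = binom.cdf(d - 1, n, p_upper)       # P(successes < d | coverage = p_upper)
    beta = 1.0 - binom.cdf(d - 1, n, p_lower)  # P(successes >= d | coverage = p_lower)
    return alpha, beta

# Example: a 19-unit sample with decision rule 13, distinguishing
# 80% coverage from 50% coverage.
alpha, beta = lqas_risks(n=19, d=13, p_upper=0.80, p_lower=0.50)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")  # both risks come out below 10%
```

With only 19 interviews per area, both misclassification risks stay below ten per cent, which is what makes designs like this attractive as a ‘light’, frequent monitoring tool.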

Table: Eleven innovations potentially useful for innovative development evaluations

1. Crowdsourcing: A large number of people actively report on a situation around them, often using mobile phone technology and open source software platforms
2. Real-Time, Simple Reporting: A means to reduce formal reporting requirements for programme and project managers to a minimum and free up their time to provide more frequent, real-time updates, which may include text, pictures and videos made by computer or mobile device
3. Participatory Statistics: An approach in which local people themselves generate statistics; participatory techniques are replicated with a large number of groups to produce robust quantitative data
4. Mobile Data Collection: The targeted gathering of structured information using mobile phones, tablets or PDAs running a dedicated software application
5. The Micro-Narrative: The collection and aggregation of thousands of short stories from citizens, using special algorithms to gain insight into real-time issues and changes in society
6. Data Exhaust: The massive, passive collection of transactional data from people’s use of digital services like mobile phones and web content such as news media and social media interactions
7. Intelligent Infrastructure: Equipping all - or a sample of - infrastructure or items, such as roads, bridges, buildings, water treatment systems, handwashing stations, latrines, cookstoves, etc., with low-cost, remotely accessible electronic sensors
8. Remote Sensing: Observing and analyzing a distant target using information from the electromagnetic spectrum captured by satellites, aircraft or other airborne devices
9. Data Visualisation: The graphical and interactive representation of data, often in the form of videos, interactive websites, infographics, timelines, data dashboards, maps, etc.
10. Multi-level Mixed Evaluation Method: The deliberate, massive and creative use of mixed (quantitative and qualitative) methods on multiple levels for complex evaluations, particularly for service delivery systems
11. Outcome Harvesting: An evaluation approach that does not measure progress towards predetermined outcomes, but rather collects evidence of what has been achieved and works backward to determine whether and how the project or intervention contributed to the change

Source: Innovations in Monitoring & Evaluating Results, Discussion Paper, UNDP, 5 November 2013

So there is good reason for optimism. The day will soon come when it is standard practice for all evaluations to be carried out with mixed methods at multiple levels, with improved primary data collection enabling us to evaluate what really counts in our interventions: the outcomes and the impact.

Coming up next week: Innovations in advocacy evaluation and views from the African Evaluation Association conference on innovation in evaluation.

 

Image: PT Bambu - Ibuku Green School
