Week 7: Innovation in evaluation

By
Patricia Rogers

This is the first in a series of blogs on innovation, which includes contributions from Thomas Winderl and Julia Coffman.

The series will lead up to the African Evaluation Association conference at the beginning of March in Yaounde, Cameroon, where BetterEvaluation will be sponsoring a strand on methodological innovation.

We consider innovations to be methods and approaches to evaluation that are actually “new”.  They are not simply a relabelling of existing knowledge with a new, proprietorial label. Many people come into evaluation without formal training, or with training that does not provide a good understanding of the range and scope of evaluation practice and theory.  So sometimes, they claim that their approach is innovative (such as using a mix of qualitative and quantitative data in a single evaluation) when in fact it is well-established as good evaluation practice.

Invention, bricolage and translation

It’s worth thinking about different types of innovation. Some innovations in evaluation involve the invention of new technology. Tracking events through social media or geo-tagging, for example, was simply not possible before big data and the technologies behind it became available.

Some innovations are a bricolage, or patchwork, of previous ideas and techniques brought together more coherently and used more systematically. For example, the Collaborative Outcomes Reporting technique brings together existing methods of contribution analysis, data trawl and expert panel into a package that makes these pieces fit together in a more accessible way.

Some innovations involve borrowing ideas and methods from other disciplines and professions.  Approaches to causal inference for evaluation have been imported from agricultural science, clinical trials, public health,  political science, law and history. Different ways of doing interviews have been borrowed and adapted from anthropology and market research.  All evaluators can contribute to this innovation by bringing across techniques from their primary discipline or from other aspects of their lives (Stephanie Evergreen has recently demonstrated this in her use of fortune cookies to communicate evaluation results).

And some innovation is about learning from practice and rethinking the role of evaluators. Evaluation (and the work of evaluators) is often seen as something that comes along after a programme has been designed, and sometimes after it has been implemented, to add value to later decisions. There is increasing interest, however, in how the process of evaluation, and the work of evaluators and others doing evaluation, can contribute to ongoing improvements in implementation, and to better planning and design up front. Real-time evaluation, developmental evaluation and positive deviance are examples of approaches that support improved planning and ongoing learning.

Worthwhile innovation

What sorts of innovation are actually new - and useful? Where is innovation most needed - and where is it a distraction from doing the basics well?

Good innovations add value.  The growing interest in applying complexity ideas, for example, has arisen because they can help us understand and improve programmes and policies, not because they are trendy. Big data is becoming popular because it can provide insights not available through other means. 

This means that innovation is likely to be most helpful where existing knowledge is not enough to do what is needed.  Identifying these areas is therefore an important part of supporting effective innovation.

Supporting innovation

Innovation is hard.  It is not always clear what should be done and, when applying something that hasn’t been done before, we need to anticipate that it may not work.  Supportive structures  (and the right expectations) are needed for systematic experimentation and learning. 

Two current projects illustrate some ways of systematically supporting innovation in evaluation.

The Australian Department of Foreign Affairs and Trade (DFAT) is supporting the Methods Lab, in collaboration with ODI and BetterEvaluation. The Methods Lab is experimenting with a variety of methods for improving impact evaluation, and developing and trialling materials to guide the selection and implementation of different methods.

The Office of Learning, Evaluation and Research (LER) in USAID's Bureau for Policy, Planning and Learning (PPL) has begun experimenting with complexity-aware monitoring. It has produced a discussion note outlining four possible methods, which also includes suggestions for systematically experimenting with new methods.

Join us as we explore innovation in evaluation over the next few weeks, including live tweeting from Yaounde on #AfrEA14. We look forward to hearing  your experiences, suggestions and questions.


Resources

Collaborative Outcomes Reporting

Collaborative Outcomes Reporting (COR) is a participatory approach to impact evaluation based around a performance story that presents evidence of how a program has contributed to outcomes and impacts. The performance story is then reviewed by both technical experts and program stakeholders, who may include community members.

Big Data

Big data refers to data that are so large and complex that traditional methods of collection and analysis are not possible. It includes 'data exhaust' - data produced as a byproduct of user interactions with a system.

Methods Lab: Improving practice and building capacity for impact evaluation in DFAT

Between 2012 and 2016, the Methods Lab will develop, test and seek to institutionalise flexible, affordable approaches to impact evaluation. Cases will be chosen that are harder to evaluate because of the complexity of their context or the diversity of intervention variables. The Methods Lab approach combines a hands-on ‘learning-by-doing’ style of working with commissioning and implementing agencies, mixed methods for data collection and analysis, guidance to ensure rigorous thought, sharing of experiences via an international platform (BetterEvaluation), and the institutionalisation of processes and best practices. While the cases and the Methods Lab approach focus on DFAT and its programmes, the results will have wide applicability.

Discussion Note: Complexity Aware Monitoring

PPL's Office of Learning, Evaluation and Research (LER) has developed a Discussion Note: Complexity-Aware Monitoring. This paper is intended for those seeking cutting-edge solutions to monitoring complex aspects of strategies and projects.


Image source: Light Bulb Basil, by Johannes H. Jensen
