Mixed methods in evaluation Part 3: Enough pick and mix; time for some standards on mixing methods in impact evaluation

1st August 2013 by Tiina Pasanen

In our third blog on mixed methods in evaluation, Tiina Pasanen from ODI focusses on impact evaluations (IEs) – a specific type of evaluation receiving a lot of attention in international development right now, with hundreds being conducted every year. The clear majority of them are based on quantitative data and econometric analysis. There is much talk about the importance of combining methods to triangulate results and to better understand why something works, but in reality mixed methods IE designs are still rare and often fail to provide enough information for readers to follow and assess what has been done and why. As the number of mixed methods IEs is likely to grow in the next few years, should there be minimum standards as to what constitutes a mixed methods design?

Read part 1 and part 2 of the mini-series on mixed methods.

Experimental and quasi-experimental IE methods, which promise to reveal ‘what really works’, have proven especially appealing to donors in the past ten years. Finally, some accountability and clear numbers to convince the public that the money is not wasted and to prove that people’s lives have changed.

However, evaluations which conclude only that a treatment group’s enrollment rate rose 2% compared to a control group aren’t enough anymore. Lately, attention has shifted to ‘why and how it works’, putting emphasis not only on recording the impact but on exploring how and why it happens (or doesn’t happen) and the cultural and contextual factors hindering or reinforcing it. Understanding why a cash transfer programme didn’t increase enrollment rates for girls as expected is much more relevant and useful than just knowing that it didn’t.
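To make concrete what such a headline number represents, here is a minimal sketch using entirely hypothetical data: the simple difference in enrollment rates between a treatment and a control group. This is the kind of figure a quantitative IE reports – and, as argued above, the number alone says nothing about *why* the change did or did not happen.

```python
# Minimal sketch (hypothetical data): the headline number a quantitative
# IE might report -- the difference in enrollment rates between a
# treatment group and a control group.

def enrollment_rate(group):
    """Share of children in the group who are enrolled (1 = enrolled, 0 = not)."""
    return sum(group) / len(group)

# Hypothetical enrollment indicators, one per child.
treatment = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # 8 of 10 enrolled
control   = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]   # 6 of 10 enrolled

effect = enrollment_rate(treatment) - enrollment_rate(control)
print(f"Estimated effect on enrollment: {effect:+.0%}")  # prints "+20%" for this toy data
```

A real evaluation would of course use proper sampling and standard errors; the point here is only that the output is a single number with no explanation attached.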

This has led (or at least contributed) to the growing emphasis on the role of theory of change in IEs as a way to overcome the limitations of ‘black-box’ evaluations. ‘Why’ questions have also steered the discourse towards combining methods and bringing qualitative methods in alongside formal econometrics. Qualitative methods are usually seen as better at understanding processes, beliefs, attitudes, behaviour, social relationships and institutional dynamics, all of which play a huge role in explaining and making sense of observed changes.

But has the situation really changed?

A quick look at the current IE databases reveals a different picture. Less than 3% of the studies in 3ie’s IE database are classified as mixed methods designs. J-PAL’s database, which naturally concentrates only on RCTs, doesn’t have a category for mixed methods (though some of the studies include additional methods), and the same goes for the World Bank’s IE database, which only categorises experimental and quasi-experimental techniques.

Of course, this is not the whole picture. Databases don’t include every IE, especially those conducted by NGOs and consultancies, or those not using formal IE methods but still robustly exploring ways to link outputs to impacts, such as many participatory impact assessments.

There are well-documented IE studies which clearly explain which evaluation questions the different methods answer, describe how the sampling was done, and highlight the synergy obtained from the mixed methods approach. For example, Adato’s study of Conditional Cash Transfer programmes in Nicaragua and Turkey combines ethnographic and quasi-experimental methods, and not only explains why some of the expected impact did not materialise but also reveals unintended impacts and ‘pseudo-impact’.

In spite of these good examples, IEs with a mixed methods approach are still scarce in practice. In most cases the design is strongly (quasi-)experimental, combined with a couple of in-depth interviews or focus group discussions, and it often lacks enough information for readers to follow and assess the logic and quality of the analysis. If we want mixed methods designs to be rigorous and credible, shouldn’t there be some minimum standards on how methods and data are used, combined and reported?

Setting minimum standards is difficult, but it is something we should aim for. The first step is to provide more guidelines, recommendations and clear examples of how to design and conduct robust mixed methods evaluations. This InterAction guidance note focusing on the use of mixed methods in IEs is excellent, but we need more rigorously conducted examples highlighting what methods are used, at which stage the methods and data are combined, what kind of evaluation questions they address and how the sampling is done.

Ultimately, not giving enough emphasis to reporting the qualitative methods and methodology used seriously undermines the insights gained from those methods. When econometric techniques and sampling are explained in detail while focus group discussions are given only a few lines, it is no wonder that the results of the latter are not considered rigorous and valuable. Readers need more information to assess the quality of the methodology used.

If we want to translate the rhetoric around the use and importance of mixed methods into practice, we need to pay more attention not only to how qualitative methods are used but also to how they are reported.

The first post in this series was Mixed methods in evaluation part 1: a warm up. The second was: Mixed methods in evaluation part 2: exploring the case of a mixed-method outcome evaluation.


Combine qualitative and quantitative data

Using a combination of qualitative and quantitative data can improve an evaluation by ensuring that the limitations of one type of data are balanced by the strengths of another, so that understanding is improved by integrating different ways of knowing. Most evaluations collect both quantitative data (numbers) and qualitative data (text, images); however, it is important to plan in advance how these will be combined.
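One simple way to plan that combination is to link the two data types by a shared unit of analysis. The sketch below uses entirely hypothetical household data (the identifiers, attendance figures and coded themes are all invented for illustration) to show quantitative survey results paired with themes coded from qualitative interviews, so each number carries an explanation alongside it.

```python
# Minimal sketch (hypothetical data): integrating quantitative survey
# results with qualitative themes coded from interviews, linked by a
# shared household identifier.

survey = {            # household id -> change in school attendance (days/month)
    "hh01": +4,
    "hh02": 0,
    "hh03": -2,
}

interview_themes = {  # household id -> themes coded from interview transcripts
    "hh01": ["transfer spent on uniforms"],
    "hh02": ["school too far", "transfer delayed"],
    "hh03": ["girl withdrawn for housework"],
}

# Combine the two data types at the analysis stage, per household.
for hh, change in survey.items():
    themes = "; ".join(interview_themes.get(hh, ["no interview"]))
    print(f"{hh}: attendance change {change:+d} days/month -- {themes}")
```

The design choice illustrated here is integration at the analysis stage; as the guidance cited below discusses, methods can also be combined at the design or data-collection stage.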

Read more

Bamberger, M. (2012) Introduction to mixed methods in impact evaluation, InterAction. Available at: http://www.interaction.org/document/guidance-note-3-introduction-mixed-m...

Adato, M. (2012) Integrating Survey and Ethnographic Methods to Evaluate Conditional Cash Transfer Programs, IFPRI. Available at: http://www.ifpri.org/sites/default/files/publications/ifpridp00810.pdf

[Image source: Pedro Couthino/Flickr]

Author: Tiina Pasanen, Research Officer, Overseas Development Institute, United Kingdom.
