Adapting evaluation in the time of COVID-19 — Part 3: Frame


Evaluation needs to respond to the changes brought about by the COVID-19 pandemic. Beyond the direct implications for the logistics of collecting data and managing evaluation processes, the pandemic has led to rapid changes in what organisations are trying to do and in how evaluation can best support these changes.

In the third post of our Adapting Evaluation in the time of COVID-19 blog series, we focus on the third cluster of tasks in the Rainbow Framework: FRAME the boundaries of an evaluation. This means identifying the primary intended users, deciding purposes (primary intended uses), specifying the key evaluation questions, and determining what 'success' looks like. We've brought together some useful advice and resources for addressing these tasks.

1. Identify primary intended users

Clarify who will actually use the evaluation—not in vague, general terms (e.g. "decision-makers") but in terms of specific identifiable people (e.g. the manager and staff of the programme; the steering committee; funders deciding whether to fund this programme or similar programmes in the future). View full 'identify primary intended users' page in the Rainbow Framework.

While your primary intended users may remain the same, it’s worth reviewing this in light of potentially new organisational circumstances. Some things to look at include:

  • Have there been any major staffing changes or restructuring? If some of your primary intended users are no longer in the same role or have left, will there be a replacement who you should identify as a new intended user? If so, what orientation will you need to give your new intended users? How will you help them to feel ownership of, and buy-in to, the evaluation? Involving intended users in decision-making often helps with this.

  • Have the primary intended users fundamentally changed? In some cases there is now increased attention to prioritising local implementers and local communities as primary intended users of evaluation, to inform locally appropriate adaptations.

Resources

Utilization-Focused Evaluation Checklist: Step 3 of this checklist contains useful advice about identifying primary intended users, monitoring their engagement, and handling changes in who they are.

Utilization-Focused Evaluation: A primer – goes into more detail about determining the primary intended users (PIUs) of an evaluation. It outlines desired attributes of PIUs and offers advice for engaging with them throughout the evaluation to strengthen their sense of ownership.

One Way Evaluation Can Center Affected Communities: Put Nonprofits in the Driver's Seat – In this blog, Nancy Latham provides some examples of how evaluators can centre the needs of nonprofits and the communities they serve.

2. Decide purposes

Clarify the intended uses of this evaluation—is it to support improvement, for accountability, for knowledge building? Is there a specific timeframe required (for example, to inform a specific decision or funding allocations)? If there are multiple purposes, decide how you will balance these. View full 'decide purpose' page in the Rainbow Framework.

Working with your primary intended users, it is important to review the primary intended uses of the evaluation. Some questions to get you started with this are:

  • What were the initial intended purposes of the evaluation?
  • Is the prioritisation of these purposes still the same? What was a priority before COVID-19 may have been displaced by a new, emergent priority as the landscape has shifted. Are the priorities of your primary intended users still the same? For example, do they still have the same amount of time and headspace to dedicate to engaging with the different stages of the evaluation?
  • Are there any new uses for the evaluation that have emerged in the current circumstances? There might now be more emphasis on short-run, local learning rather than longer cycles of policy-level learning, and more attention to addressing new accountability requirements.

Resources

Identifying the Intended User(s) and Use(s) of an Evaluation: This short IDRC guide on utilization-focused evaluation has some useful facilitation questions for identifying the intended uses of an evaluation.

In addition, the two Utilization-Focused Evaluation resources listed in the section above will also be useful for this task.

Public integrity for an effective COVID-19 response and recovery: This OECD paper discusses issues around the purposes of evaluation in the time of COVID-19. It argues that there will likely be an increased need for evaluation to contribute to effective accountability for expenditure and changes to processes, policies, regulations and laws, stating that:

“The COVID-19 crisis is obliging governments to make quick decisions and implement drastic measures to protect communities at risk and limit the economic consequences that will follow. Past crises have shown that emergencies and subsequent rapid responses create opportunities for integrity violations, most notably fraud and corruption, seriously weakening the effectiveness of government action.”

What will it take to learn fast to save lives during COVID-19 (coronavirus)? In this blog, Lauren Kelly and Christopher Nelson discuss ways that evaluation can better support nimble learning and adaptation, including options such as framing questions, emergent learning tables, and before- and after-action reviews.

Evaluation Implications of the Coronavirus Global Health Pandemic Emergency: This Blue Marble blog, written by Michael Quinn Patton in March, touches on a number of important issues around how evaluation will need to change in response to COVID-19. One of the key points we took from this piece is that, at this point in time, there is an ethical imperative for evaluations to focus on use:

“Make it about use not about you. The job of the people you work with is not to comfort you or help you as an evaluator. Your job is to help them, to let them know that you are prepared to be agile and responsive, and you do so by adapting your evaluation to these changed conditions. This may include pushing to keep evaluations from being neglected or abandoned by showing the ongoing relevance of evaluative thinking and findings – which means adapting to ensure the ongoing relevance and utility of evaluative thinking and findings. For example, in an international project with many field locations, instead of continuing to administer a routine quarterly monitoring survey, to be more useful we’ve created a short open-ended survey about how people are being affected and adapting at the local level, and what they need now from headquarters.”

3. Specify the key evaluation questions (KEQs)

Articulate a small number of broad evaluation questions that the evaluation will be designed to answer. These are different to the specific questions you might ask in an interview or a questionnaire. View full 'specify the key evaluation questions' page in the Rainbow Framework.

The evaluation questions determine the scope of your evaluation. While they are often treated as fixed once determined at the beginning of an evaluation, the current crisis warrants revisiting them. In addition to ensuring that your KEQs are still relevant to the intended uses of the evaluation (see above), they should be reviewed to ensure your evaluation is not directly or indirectly causing harm. In particular, we suggest:

  • Consider limiting your key evaluation questions to the ‘must have’ questions

Lists of KEQs often include some ‘must have’ and some ‘nice to have’ questions. Right now, people around the world are dealing with the COVID-19 crisis and its effects on themselves, their loved ones, their work and their personal lives. There is an ethical question to ask about whether the ‘nice to have’ questions are worth collecting data on at present. This is particularly so if your data collection cannot be done remotely, but even if all of your data is collected virtually or by telephone, you are still adding to the reporting burden of your stakeholders at a very difficult time. It’s important to ask whether this data is worth the cost.

Resources

Real-time Evaluations of Humanitarian Action - an ALNAP Guide. Pilot version: Real-time evaluations are a good point of reference for evaluation in times of COVID-19, as they are designed to gather information quickly that can be of immediate use to decision-making. ALNAP's guide makes the excellent point that, "As with all research, there is a distinction between knowledge that is interesting and knowledge that has utility in terms of the evaluation purpose." The guide includes a flowchart on page 17 designed to help readers narrow the list of KEQs.

  • Include questions of equity in your ‘must have’ list

The impact of a crisis is not universal across different people. Marginalised populations are likely to be disproportionately affected, and it's important to recognise and investigate these different impacts so that policies and programmes can be designed or revised appropriately.

See also the below point about working with community members to design KEQs.

Incorporating Intersectionality in Evaluation of Policy Impacts on Health Equity: A quick guide – This guide from the SOPHIE project (Evaluating the impact of structural policies on health inequalities and their social determinants, and fostering change) introduces the central concepts of intersectionality theory and presents a number of questions to consider in an intersectionality-based policy analysis. It also includes a number of examples of how one-dimensional analyses of population health and health inequalities can mask evidence of true health effects.

UNFPA: Adapting evaluations to the COVID-19 Pandemic: This document contains principles for UNFPA's regional and country offices, including: “Crisis affects different people in different ways, impacting disadvantaged populations disproportionately. Human rights-based, equity-focused and gender-responsive evaluations are more important than ever, to inform interventions focused on leaving no one behind”.

  • Involve those who will be affected by the results of the evaluation in the evaluation itself, including at the decision-making level and when developing or revising KEQs.

This includes being open to including KEQs that are not aligned with funder priorities, but which are of importance to local communities.

Resources

How can Covid-19 be the catalyst to decolonise development research? Carmen Leon-Himmelstine and Melanie Pinet’s post on the From Poverty to Power blog discusses issues of power within North–South research and evaluation collaborations. They raise a number of important points, but specifically related to what gets researched, they write:

“While research on Covid-19 is essential to understand and provide solutions to its devastating impact, including a sharp increase in poverty, researchers in the Global South have raised concerns about deprioritising other existing research priorities such as Malaria, Sexual and Reproductive Health, or HIV/AIDS. Researchers and donors need to listen carefully and support the research agenda that local researchers consider essential in their respective countries, whether it is Covid-19 related or not.”

Evaluation practice in Aboriginal and Torres Strait Islander Settings (Australia) – specifically, the Ethical Protocol, which contains a number of principles to guide practitioners working within Aboriginal and Torres Strait Islander settings in Australia, along with advice on how to apply them. The protocol has six themes; we encourage you to look at them all, as they are equally important and cover a wide range of principles. Related to deciding on KEQs, we wanted to highlight the following point on how to apply the Empowerment Principle within the theme 'Prioritise self-determination, community agency and self-governance':

"Seek what is important and what needs to be evaluated from the community – take a ground up perspective to understand community priorities. Even top down projects should include community-led evaluation, together with what funders want to know."

4. Determine what 'success' looks like

Clarify the values that will be used in the evaluation in terms of criteria (aspects of performance) and standards (levels of performance). These will be used together with evidence to make judgments about whether or not an intervention has been successful, or has improved, or is the best option. Decide how different stakeholders’ values will be identified and negotiated. View full 'determine what success looks like' page in the Rainbow Framework.

  • How might the standards and criteria used in the evaluation need to change? Are there new domains of performance that have become important? Is it still reasonable to expect the same standards of performance, for example, meeting agreed targets? Are there agreed standards and criteria available, or will these need to be emergent? Whose voices will count in terms of what success looks like?
  • How will you deal with trade-offs and negotiate between different values? Success cannot always be summarised by simply adding up performance across different domains. Are there any non-negotiable domains that must be satisfactory? Are there areas where trade-offs will be needed, balancing domains rather than optimising one at the expense of another? (A short illustrative sketch follows this list.)
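
To make the point about aggregation concrete, here is a minimal sketch, in Python, of one possible synthesis rule in which success is not a simple sum of scores. The criteria, the 1–5 rubric scale, the satisfactory threshold and the ratings are all hypothetical, invented purely for illustration; they are not from this post or any of the resources cited here.

```python
# A minimal sketch of a synthesis rule for judging success.
# All criteria, ratings and the 1-5 scale below are hypothetical examples.

SATISFACTORY = 3  # assumed minimum acceptable level on a 1-5 rubric

ratings = {
    "reach of marginalised groups": 2,  # treated here as non-negotiable
    "timeliness of delivery": 5,
    "cost-efficiency": 4,
}
non_negotiable = {"reach of marginalised groups"}

def judge(ratings, non_negotiable):
    # Any unsatisfactory non-negotiable domain caps the overall judgment,
    # regardless of how strong performance is elsewhere.
    if any(ratings[domain] < SATISFACTORY for domain in non_negotiable):
        return "not successful: a non-negotiable domain is unsatisfactory"
    # Only once the non-negotiables are met does averaging across domains apply.
    average = sum(ratings.values()) / len(ratings)
    return "successful" if average >= SATISFACTORY else "not successful"

print(judge(ratings, non_negotiable))
# Prints "not successful..." even though the simple average (about 3.7)
# would have looked fine - which is the trade-off point made above.
```

Checking non-negotiable domains before averaging is just one way of operationalising 'trade-offs rather than simple addition'; weighted averages or qualitative rubric descriptors negotiated with stakeholders are equally valid alternatives.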

Resources

What will it take to learn fast to save lives during COVID-19 (coronavirus)? In their blog on emergent learning, Lauren Kelly and Christopher Nelson discuss two tools which seem particularly relevant for this task:

“The ‘Framing Question’ is built with the simple phrase “What will it take to…” [deliver healthcare to the most vulnerable COVID-19 patients?]. It creates a focus for collective learning by asking what it will take to achieve a desired outcome. It is a forward-focused, action-oriented challenge to the group to ensure alignment around an agreed premise or idea. When done well, the framing question is at once a mechanism to coalesce around a desired outcome and a means to define individual contributions. A good framing question has the ability to ‘train a group’s attention forward, in a collective inquiry that leads to action’.

“‘Before Action Review’ is an opportunity to discuss in detail what success will look like, and to establish intended results and identify anticipated pitfalls. The review sets the team on a learning pathway, linked to an iterative, and often repeated, ‘After Action Review’.”

We’d suggest that while these are useful tools when planning a program, they will also be useful when applied iteratively at each critical decision point in adaptive management.

Transforming Evaluations blog series: In part 3 of the Transforming Evaluations blog series, Zenda Ofir synthesises a number of voices writing about how evaluation must adapt to the pandemic into a list of actions for evaluators in and beyond the time of COVID-19. We’d recommend you read the full list (and we’ll draw on other parts of it in other blogs in this series), but the following actions relate to defining success and evaluation standards:

  • Take responsibility for helping to ensure credible evidence and ‘truth’
  • Respect diverse value systems, worldviews, narratives, models and frameworks
  • Deal with the increasing need for trade-offs, incl. in terms of ‘no-one left behind’
  • Rethink evaluation questions, criteria and competencies.
 
Part 5 of the Transforming Evaluations blog series goes into some more detail about these actions.
 

Over to you

Have you had to change the framing of an evaluation, or your organisation's approach to framing evaluations? Do you have new priorities in terms of intended users or uses? Please share your experiences, and any resources you've found useful for pivoting evaluations, in the comments below or by contributing content.
