Consider important elements of what is being evaluated
The nature of what is being evaluated makes a difference to how it should be evaluated. It is helpful to identify its particular aspects and check that these have been addressed in the evaluation design.
1. Check the stage of development of the project or program
First, check the implications of the stage of development of the project or program being evaluated. Is it still being planned? Is it part-way through implementation? Is it near the end, or has it in fact already ended?
| Stage of development | Consequence | Possible implication for the evaluation design |
| --- | --- | --- |
| Not yet started | Can set up data collection from the beginning of implementation | Possible to gather baseline data as a point of comparison, and to establish comparison or control groups from the beginning. Opportunity to build some data collection into administrative systems to reduce costs and increase coverage. |
| | Period of data collection will be long | Need to develop robust data collection systems, including quality control and storage. |
| Part-way through implementation | Cannot get baseline data unless this has already been set up | Will need to construct retrospective baseline data to estimate changes that have occurred (see the change-estimate sketch after this table). |
| | Might be able to identify "bright spots" where there seems to be more success, as well as cases with less success | Scope to do purposeful sampling and learn from particular successes and from cases that have failed to make much progress. |
| Almost completed | Cannot get baseline data unless this has already been set up | Will need to construct retrospective baseline data to estimate changes that have occurred. |
| | Depending on timeframes, some outcomes and impacts might already be evident | Opportunity to gather evidence of outcomes and impacts. |
| Completed | Cannot get baseline data unless this has already been set up | Will need to construct retrospective baseline data to estimate changes that have occurred. |
| | Depending on timeframes, some outcomes and impacts might already be evident | Opportunity to gather evidence of outcomes and impacts. |
| | Cannot directly observe implementation | Will need to depend on existing data or retrospective recollections about implementation. |
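Where baseline data exist, or a retrospective baseline can be reconstructed (for example from recall or administrative records), the change estimate is the difference between endline and baseline values, optionally netted against a comparison group. A minimal sketch in Python; the scores and group names are invented for illustration, not drawn from the source:

```python
# Minimal sketch of estimating change against a baseline, with an
# optional comparison group (difference-in-differences logic).
# All figures are invented for illustration.

def change(baseline: float, endline: float) -> float:
    """Simple before/after change for one group."""
    return endline - baseline

def diff_in_diff(treat_base: float, treat_end: float,
                 comp_base: float, comp_end: float) -> float:
    """Change in the treated group beyond the change that the
    comparison group experienced over the same period."""
    return change(treat_base, treat_end) - change(comp_base, comp_end)

# Hypothetical literacy scores (0-100) at baseline and endline.
print(change(42.0, 55.0))                    # 13.0: before/after only
print(diff_in_diff(42.0, 55.0, 41.0, 46.0))  # 8.0: net of background trend
```

Without a baseline there is nothing to difference against, which is why evaluations that start late must reconstruct one retrospectively.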
2. Is it complex or complicated?
Second, consider whether there are important aspects that are either complicated (with many components) or complex (emergent), and that should therefore be addressed in the evaluation design.
(i) Focus
Does everyone share the same objectives?
| Homogeneity of objectives | Implications |
| --- | --- |
| Everyone shares a single set of objectives | Impacts to be included can be readily identified from the beginning. |
| Different stakeholders value different objectives (competing objectives, or different objectives at different levels) | Need to identify and gather evidence about multiple possible changes. Need an agreed way to weight or synthesise results across different domains to produce a judgement of overall performance (see the weighted-rubric sketch after this table). |
| The stated objectives are changing (often in response to changing needs or opportunities) | Need nimble impact evaluation systems that can gather adequate evidence of emergent intermediate outcomes or impacts. |
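One way to make an 'agreed way to weight or synthesise results' concrete is a weighted rubric: stakeholders negotiate domain weights in advance, and the overall judgement is the weighted combination of per-domain ratings. A minimal sketch; the domains, weights, and scores below are hypothetical:

```python
# Hypothetical weighted synthesis of results across domains.
# Domains, weights, and scores are invented; in practice the weights
# would be negotiated and agreed with stakeholders in advance.

domain_scores = {   # performance rating per domain, 0-100
    "health outcomes": 72,
    "community participation": 85,
    "cost effectiveness": 60,
}
domain_weights = {  # agreed relative importance; must sum to 1
    "health outcomes": 0.5,
    "community participation": 0.3,
    "cost effectiveness": 0.2,
}

assert abs(sum(domain_weights.values()) - 1.0) < 1e-9

overall = sum(domain_scores[d] * domain_weights[d] for d in domain_scores)
print(f"Overall weighted performance: {overall:.1f}")  # 73.5
```

Agreeing the weights before results are known means the synthesis rule cannot later be adjusted to favour any one stakeholder's preferred outcome.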
(ii) Management
Who has responsibility for management and decision making?
| Who is responsible | Implications |
| --- | --- |
| A single organisation | Primary intended users and uses are easy to identify and address when developing Key Evaluation Questions and endorsing the design. |
| Multiple organisations (which can be identified) with specific, formalised responsibilities | Likely to need to negotiate access to data and ways to link and coordinate data. Might need to negotiate the parameters of a joint impact evaluation, including its scope and focus. |
| A changing list of organisations working together in flexible ways | Need nimble impact evaluation systems that can gather evidence about the contributions of emergent actors and respond to the different ways they value intended and unintended impacts. |
(iii) Consistency
How much variability is there in how activities are implemented?
| Level of variability | Implications |
| --- | --- |
| Standardised: a one-size-fits-all program | Quality of implementation should be investigated in terms of compliance with 'best practice' (see the fidelity sketch after this table). |
| Adapted: variations of the program are planned in advance and matched to pre-identified contextual factors | Quality of implementation should be investigated in terms of compliance with the practices prescribed for that type of situation. |
| Adaptive: an evolving and personalised program that responds to specific and changing needs | Quality of implementation should be investigated in terms of how responsive and adaptive service delivery was. |
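For the standardised case, compliance with 'best practice' can be checked with a simple fidelity score: the share of prescribed practices actually observed at each site. A toy sketch with invented practice names:

```python
# Toy fidelity score for a standardised program: implementation
# quality as the share of prescribed practices observed at a site.
# Practice names are invented for illustration.

prescribed = [
    "session plan followed",
    "materials distributed",
    "attendance recorded",
    "follow-up visit made",
]
observed = {"session plan followed", "attendance recorded", "follow-up visit made"}

fidelity = len(observed & set(prescribed)) / len(prescribed)
print(f"Fidelity score: {fidelity:.0%}")  # 75%
```

For the adapted and adaptive cases, the same idea would need a different checklist per context, or a judgement of responsiveness rather than a fixed checklist.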
(iv) Necessity
How many different options are there for solving the problem or producing the intended impacts? To what extent is this exact initiative needed to solve the problem?
| Number of possible interventions | Implications |
| --- | --- |
| There is only one way to achieve the intended impacts | Counterfactual reasoning is appropriate. |
| The intervention is one of several ways of achieving the impacts, and the options can be identified | Counterfactual reasoning is not appropriate, because it only accepts a causal relationship between the intervention and the impacts if the impacts would not have occurred in the absence of the intervention (see the toy illustration after this table). |
| The intervention is possibly one of several ways of achieving the intended impacts (uncertain) | Counterfactual reasoning is not appropriate, for the same reason. |
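The reasoning in this table can be made explicit: strict counterfactual attribution credits the intervention only if the impact occurred with it and would not have occurred without it. Where an identified alternative would have produced the impact anyway, the test denies credit even if the intervention genuinely did the work. A toy illustration of that logic (a simplification, not a real analysis):

```python
# Toy illustration of why strict counterfactual attribution breaks
# down when several interventions could each produce the same impact.

def counterfactual_credit(outcome_with: bool, outcome_without: bool) -> bool:
    """Strict counterfactual test: credit the intervention only if the
    impact occurs with it AND would not occur without it."""
    return outcome_with and not outcome_without

# Case 1: the intervention is the only route to the impact.
print(counterfactual_credit(outcome_with=True, outcome_without=False))  # True

# Case 2: an alternative programme would have produced the impact
# anyway, so the test denies credit even if this intervention did
# the work in practice.
print(counterfactual_credit(outcome_with=True, outcome_without=True))   # False
```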
(v) Sufficiency
To what extent will the problem be solved by the intervention alone?
| Generalisability of the intervention | Implications |
| --- | --- |
| The intervention alone is enough to produce the intended impacts, and it works the same for everyone | Counterfactual reasoning is appropriate. It is reasonable to ask 'Does it work?' |
| Works only in specific contexts which can be identified (e.g. implementation environments, participant characteristics, support from other interventions) | The impact evaluation question needs to be 'For whom, in what circumstances, and how does it work?' (see the disaggregation sketch after this table). Counterfactual reasoning is appropriate only if the causal package of supportive context and other activities can be identified and included. |
| Works only in specific contexts which are not understood and/or not stable | The impact evaluation question needs to be 'For whom, in what circumstances, and how does it work?' Counterfactual reasoning is not appropriate, because the causal package of supportive context and other activities is changing and/or poorly understood and cannot be adequately identified. |
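Answering 'for whom, in what circumstances, and how does it work?' implies disaggregating results by context or participant characteristics rather than reporting a single average effect. A minimal sketch; the contexts and outcome changes are invented:

```python
# Hypothetical disaggregation of outcome changes by context, for the
# question 'For whom, in what circumstances does it work?'.

from collections import defaultdict
from statistics import mean

records = [  # (context, change in outcome score)
    ("urban, with follow-up support", 12.0),
    ("urban, with follow-up support", 9.5),
    ("rural, no follow-up support", 1.0),
    ("rural, no follow-up support", -0.5),
]

by_context: dict[str, list[float]] = defaultdict(list)
for context, delta in records:
    by_context[context].append(delta)

# A single pooled average would hide the contrast between contexts.
for context, deltas in by_context.items():
    print(f"{context}: mean change {mean(deltas):+.1f} (n={len(deltas)})")
```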
(vi) Change trajectory
How are the impact variables expected to change over time: in a straight line of increase, or along a J curve? To what extent are the relationships between variables understandable and predictable?
| Relationship between variables | Implications |
| --- | --- |
| Simple relationship (cause and effect); predictable | Measurement of change can be done at a convenient time and confidently extrapolated. |
| Complicated relationship that needs expertise to understand and predict | Measurement of changes should be timed for when it will be most meaningful; expert advice will be needed. |
| Emergent factors and multiple causes, with sudden changes (tipping points) that are unpredictable and can only be understood in retrospect | Changes will need to be measured at multiple times, as the change trajectory cannot be predicted (see the J-curve sketch after this table). |
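A toy J-curve shows why an unpredictable trajectory calls for measurement at multiple times: a single early reading would suggest harm, while later readings reveal gains. The function below is invented purely for illustration:

```python
# Toy J-curve: outcomes dip before they grow. Measuring once, early,
# would misread the trajectory; repeated measurement reveals it.

def j_curve(month: int) -> float:
    """Hypothetical outcome change that dips, then recovers (J curve)."""
    return 0.2 * month * month - 2.0 * month  # trough around month 5

for month in (3, 6, 12, 24):
    print(f"month {month:2d}: change = {j_curve(month):+.1f}")
# month  3: change = -4.2  (an early reading suggests harm)
# month  6: change = -4.8  (near the trough)
# month 12: change = +4.8  (recovery)
# month 24: change = +67.2 (growth)
```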
(vii) Unintended impacts
To what extent are unintended impacts predictable?
| Predictability of unintended impacts | Implications |
| --- | --- |
| Easily predictable, and therefore readily included in data collection plans | Need to draw on previous research and common sense to identify potential unintended impacts and gather data about them. |
| Predictable only with expertise | Need advice from experts about potential unintended impacts and how these might be identified. |
| Unpredictable: only identified and addressed when they occur | Need to cast a wide net of data collection that will catch evidence of unexpected and unanticipated unintended impacts. |
Source: Resource Hub for Evaluating C4D (2016); adapted from Funnell and Rogers (2011), pp. 90-91, and Rogers (2016).
3. Identify issues to be addressed
Check whether any of the following issues are present; if so, they will need to be addressed in the design.
| Issue | Possible implications for the evaluation design |
| --- | --- |
| Long time until impacts will be evident | Might need to gather data about intermediate outcomes (which will be evident within the timeframe of the evaluation) and use other research and evaluation evidence to predict the likely achievement of impacts. |
| Difficulty observing implementation activities (e.g. in conflict-affected or remote areas) | Might need to gather data through remote sensing, key informants, big data, or crowdsourcing. |
| Difficulty observing results (outcomes, impacts), e.g. for sensitive issues or private behaviour | Might need to gather data through key informant interviews, unobtrusive measures (for example, patterns of wear from foot traffic), or techniques for gathering sensitive data (for example, a polling booth). |