Search results

  1. Develop programme theory / theory of change

    Logic model, Program logic, Programme logic, Causal model, Results chain, Intervention logic, ToC

    A programme theory explains how an intervention (a project, a programme, a policy, a strategy) is understood to contribute to a chain of results that produce the intended or actual impacts. 

    It can include positive impacts (which are beneficial) and negative impacts (which are detrimental). It can also show the other factors which contribute to producing impacts, such as context and other projects and programmes.

    Different types of diagrams can be used to represent a programme theory.  These are often referred to as logic models, as they show the overall logic of how the intervention is understood to work.

  2. Rainbow Framework


    There are many different methods and processes that can be used in monitoring and evaluation (M&E). The Rainbow Framework organises these methods and processes in terms of the tasks that are often undertaken in M&E. These tasks are organised into seven colour-coded clusters that make it easy to find what you need: Manage, Define, Frame, Describe, Understand Causes, Synthesise, and Report & Support Use.

  3. Determine What 'Success' Looks Like


    Evaluation is essentially about values, asking questions such as: What is good, better, best? Have things improved or got worse? How can they be improved? It is therefore important for evaluations to be systematic and transparent about the values used to decide criteria and standards.

  4. Specify the Key Evaluation Questions


    Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer, not the specific questions asked in an interview or a questionnaire. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyse it, and how to report it.

  5. Visualise data

    Visualize Data, data viz, dataviz

    Data visualisation is the process of representing data graphically in order to identify trends and patterns that would otherwise be unclear or difficult to discern. Data visualisation serves two purposes: to bring clarity during analysis and to communicate.
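    The snippet above describes visualisation in general terms. As a minimal, library-free illustration (not part of the original resource, and using hypothetical monitoring data), a few lines of Python can render data as a quick text bar chart to spot trends during analysis:

```python
# Minimal sketch (illustrative only): a text bar chart for quick trend-spotting.
# The data values below are hypothetical, not from the original resource.

def bar_chart(data, width=40):
    """Return one text bar per category, scaled so the largest value fills `width`."""
    scale = max(data.values())
    return [f"{label} {value:>4} " + "#" * round(width * value / scale)
            for label, value in data.items()]

# Hypothetical quarterly programme reach
reach = {"Q1": 120, "Q2": 180, "Q3": 175, "Q4": 260}
for line in bar_chart(reach):
    print(line)
```

    Even a crude chart like this serves the two purposes named above: it brings clarity during analysis (the Q4 jump is immediately visible) and can be shared to communicate the pattern.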

  6. Investigate possible alternative explanations


    All impact evaluations should include some attention to identifying and (if possible) ruling out alternative explanations for the impacts that have been observed.

  7. Check the results are consistent with causal contribution


    One of the tasks involved in understanding causes is to check whether the observed results are consistent with a cause-effect relationship between the intervention and the observed impacts.

  8. Strengthen evaluation capacity

    Evaluation Capacity Building, Evaluation Capacity Development, Evaluation Capacity Strengthening, Evaluation Capability Development

    An important aspect of monitoring and evaluation (M&E) ‘systems’ is strengthening the M&E capacity of individuals, organisations, communities and networks. While other terms are used for this, we suggest ‘evaluation capacity strengthening’ to emphasise the value of recognising, reinforcing and building on existing capacity. M&E capacity is not just about developing competencies for doing M&E; it also includes competencies in effectively designing, managing, implementing and using M&E, and it includes strengthening a culture of valuing evidence, questioning, and evaluative thinking.

  9. Define


    This cluster of evaluation tasks develops an initial description of the programme and how it is understood to work. This can be used to:

    • engage stakeholders in the task "understand and engage stakeholders" from the 'Manage' cluster of tasks
    • guide choices about what data to collect in the 'Describe' cluster of tasks
    • inform testing of causal links when planning how to 'Understand Causes'
  10. Report and support use


    Although reporting may be one of the last evaluation tasks, it should be addressed from the first step of the evaluation process: explicitly discuss the content, sharing, and use of reports during the initial planning of the evaluation, and return to that discussion thereafter. Most importantly, identify who your primary intended users are. Use of the evaluation often depends on how well the report meets the needs and learning gaps of the primary intended users.