
3447 results

  • Iterative evaluation design

    An iterative evaluation design involves setting out an initial overall evaluation design or process at the beginning of the evaluation.
    Method
  • Single evaluation approach design

    The evaluation design is based on selecting a single existing evaluation model or approach and using it for an evaluation.
    Method
  • Evaluator-led evaluation design

    An evaluation team develops an evaluation design in response to an evaluation brief which sets out the purposes of the evaluation.
    Method
  • Community-led evaluation design

    A community develops an evaluation design, sometimes with facilitation or technical support.
    Method
  • Develop a design for the evaluation

    An evaluation design sets out how data will be collected and analysed in terms of the methods used and the research design.
    Frameworks/Guides
    Rainbow Framework
  • What counts as good evidence?

    This paper, written by Sandra Nutley, Alison Powell and Huw Davies for the Alliance for Useful Evidence, discusses the risks of using a hierarchy of evidence and suggests an alternative based on more complex matrix approaches for identifying good evidence.
    Resource
  • Qualitative comparative analysis

    Qualitative Comparative Analysis (QCA) is an evaluation approach that supports causal reasoning by examining how different conditions contribute to an outcome.
    Approach
  • Review evaluation quality

    Evaluating the quality of an evaluation can be done before it begins (reviewing the plan) or during or after the evaluation (reviewing the evaluation products or processes). This is sometimes called a quality review or meta-evaluation.
    Frameworks/Guides
    Rainbow Framework
  • Common good and equity

    Consideration of common good and equity involves an evaluation going beyond the values of evaluation stakeholders when developing an evaluative framework, to also consider common good and equity more broadly.
    Method
  • Impartiality

    Impartiality in evaluation refers to conducting an evaluation without bias or favouritism, treating all aspects and stakeholders fairly.
    Method
  • Identifying evaluative criteria

    This chapter details the use of a needs assessment to identify evaluative criteria.
    Resource
  • 20 years of outcome mapping: Evolving practices for transformative change

    This paper reflects on the evolving use of Outcome Mapping 20 years after the first publication on this approach. The resource also provides a "set of guiding practices to support transformative change".
    Resource
  • A guiding framework for needs assessment evaluations to embed digital platforms in partnership with Indigenous communities

    This open-access journal article describes a needs assessment with a subarctic Métis community in Saskatchewan, Canada.
    Resource
  • Outcome mapping + equity, gender, and social justice

    This paper introduces OM+, a new approach to thinking about and using Outcome Mapping (OM) for supporting transformative change through a focus on inclusion and leadership for equity, gender, and social justice.
    Resource
  • Understand the situation

    A situation analysis examines the current situation and the factors contributing to it. This might include identification and analysis of needs, resources, strengths, weaknesses, opportunities, threats, and/or power analysis.
    Frameworks/Guides
    Rainbow Framework
  • Power analysis: A practical guide

    This guide was developed in response to a recommendation of the Swedish Government Policy on Democratic Development and Human Rights, that power be analysed as part of context-specific poverty analysis.
    Resource
  • Quick guide to power analysis

    This 2-page guide, developed by Oxfam in 2014 and updated in 2021, provides an overview of key concepts in power analysis and why it is useful, with links to additional resources.
    Resource
  • Process tracing as a practical evaluation method: Comparative learning from six evaluations

    This 2020 paper by Alix Wadeson, Bernardo Monzani and Tom Aston presents and reflects on six evaluations where process tracing was used and identifies some key learnings.
    Resource
  • Power analysis briefing: Review of tools and methods

    This paper, developed for WaterAid in 2012, provides an overview of different types of tools and methods for power analysis, including the use of lists, network maps, and classification systems.
    Resource
  • Example outcome journal template

    This template is based on the original Outcome Mapping guidance, incorporating elements from Outcome Harvesting (e.g. significance) and allowing tagging to particular progress markers (rather than listing them all as the original does).
    Resource
  • Causal map app

    This site includes a range of resources on causal mapping and the use of the causal map app, including a guide that covers basic and advanced coding and analysis.
    Resource
  • Reflecting: journals and learning (b)logs – Using journals and learning (b)logs to assess learning

    The "Reflecting: Journals and Learning (b)logs" resource from the University of Warwick provides comprehensive guidance on the use of reflective writing for assessment and personal development.
    Resource
  • Diaries and logs

    This article provides comprehensive guidance on using diaries and logs to assess physical activity.
    Resource
  • Key informant attribution

    A method for testing causal reasoning by asking key informants.
    Method
  • The art and craft of bricolage in evaluation

    This CDI Practice Paper, by Tom Aston and Marina Apgar, makes the case for ‘bricolage’ in complexity-aware and qualitative evaluation methods.
    Resource
  • Decide who will conduct the evaluation

    Clarify who will actually undertake the evaluation. This might include people who are involved in what is being evaluated (such as implementers, clients and community members), an internal or external evaluator, or some combination of these.
    Frameworks/Guides
    Rainbow Framework
  • Identify potential unintended results

    It is useful and ethical to consider, before an intervention (project, programme, or policy) is implemented, possible negative impacts (those that make things worse, not better) and how they can be identified and addressed in an evaluation or M&E system.
    Frameworks/Guides
    Rainbow Framework
  • Develop theory of change / programme theory

    A programme theory or theory of change (TOC) explains how an intervention (a project, a programme, a policy, a strategy) is understood to contribute to a chain of results that produce the intended or actual impacts.
    Frameworks/Guides
    Rainbow Framework
  • Lost causal: Debunking myths about causal analysis in philanthropy

    This 2021 paper, updated in 2024, advocates for more causal analysis in philanthropic evaluation: not just describing actions taken and changes observed, but also learning how and why change occurred.
    Resource
  • Rethinking rigour to embrace complexity in peacebuilding evaluation

    This 2024 open-access journal article presents the inclusive rigour framework and applies it to three cases of peacebuilding evaluation.
    Resource
  • Causal Pathways 2023 Symposium and 2024 introductory sessions

    This series of webinars was first presented at the Causal Pathways Symposium 2023, which focused on "connecting, learning, and building a shared understanding of the evaluation and participatory practices that make causal pathways more visible".
    Resource
  • Causal Pathways introductory session: How do I mix and combine methods?

    This session of the Causal Pathways Symposium 2023 explores when to use different methods and how to combine them to better understand and visualise causal pathways.
    Resource