The BetterEvaluation Resource Library contains hundreds of curated and co-created resources related to managing, conducting, using, and strengthening capacity for evaluation.
You can use the search field and filters on this page to find resources you are interested in, or browse our extensive list. Alternatively, explore the Rainbow Framework to find resources organised by evaluation methods, approaches and tasks.
- 1147 results found
Using the Rainbow Framework (in French: Utilisation du cadre de référence Arc-en-ciel)
This compact version of the Rainbow Framework invites you to reflect on a series of key questions. It is important to take all of these questions into account, including reporting, from the start of an evaluation.
How to use the Rainbow Framework (in Spanish: Cómo usar el Marco Arcoíris)
This compact version of the Rainbow Framework asks you to think about a series of key questions.
Using the "Rainbow" Framework for better evaluation (in Arabic)
The "Rainbow" Framework for better evaluation prompts you to think through a series of key and important questions. It is important to keep all of these topics in mind from the beginning of an evaluation, including the reports it will produce.
On using the BetterEvaluation Rainbow Framework (in German: Zum Gebrauch des BetterEvaluation Rainbow Framework)
The BetterEvaluation Rainbow Framework prompts reflection on a series of key questions.
Using the BetterEvaluation Rainbow Framework
This compact version of the Rainbow Framework in English was created in 2014.
Value for investment: A practical evaluation theory
This booklet proposes a model for evaluating value for investment in social programs by integrating economic and evaluative thinking. Julian King is a New Zealand public policy consultant.
Integrating gender approaches in evaluation (in French: Intégrer des approches genre en évaluation)
This infographic from the École nationale d'administration publique (ENAP) serves as a practical guide for integrating gender considerations into the evaluation of a range of interventions.
Net-Map toolbox
This website provides a detailed overview of the uses of the Net-Map tool, which has been designed for influence mapping of social networks.
Monitoring and accountability practices for remotely managed projects...
This report from Tearfund brings together a number of research findings examining the issue of remote project monitoring and beneficiary accountability.
Using AI to disrupt business as usual for evaluation practitioners & firms
In this webinar, Intention 2 Impact, Inc. consultants Nina Sabarre, PhD, and Blake Beckmann shared their perspectives on the potential, challenges, and opportunities of using AI in evaluation practice.
National Evaluation Capacities (NEC) webinar: Engaging youth, addressing crisis, and building resilience (GEI and UNDP)
This National Evaluation Capacities (NEC) conference webinar explores the themes of youth engagement in national evaluation systems and the challenges associated with evaluating and strengthening these systems in crisis settings.
Complexity evaluation framework: Recognising complexity & key considerations for complexity-appropriate evaluation in the Department for Environment, Food and Rural Affairs (Defra)
The primary purpose of this framework is to equip Defra commissioners of evaluation (which may include analysts and policymakers) with a checklist of core considerations to ensure that evaluations are robust and sufficiently consider the i…
The visual representation of complexity: Definitions, examples and learning points
This visual overview was developed through a research process that identified, defined and illustrated 16 key features of complex systems.
Introduction to qualitative research methodology
This manual, written by Karina Kielmann, Fabian Cataldo and Janet Seeley, aims to give readers from a non-scientific background an introduction to key theoretical concepts and methodologies in qualitative research.
Identifying the intended user(s) and use(s) of an evaluation
This guideline from the International Development Research Centre (IDRC) highlights the importance of identifying the primary intended user(s) and the intended use(s) of an evaluation.
A short primer on innovative evaluation reporting
This book by Kylie Hutchinson presents a number of innovative ways of reporting, including different methods for presentations, narrative summaries, presenting findings visually and making use of digital outputs.
Evaluación participativa (Participatory evaluation)
EvalParticipativa is the Community of Practice and Learning in Participatory Evaluation for Latin America and the Caribbean.
Participant produced video: Giving participants camcorders as a social research method
This toolkit from Real Life Methods provides a guide to using participant-produced video, allowing participants to record their everyday lives and reflect on the things that matter to them.
People first impact method: Facilitator's toolkit
This toolkit, developed by Gerry McCarthy and Paul O'Hagan, is aimed at supporting facilitators in the teaching of the People First Impact Method (P-FIM) through a range of exercises.
New trends in development evaluation
This working paper from UNICEF reviews the latest trends in evaluation in order to develop a new strategy for improving the evaluation function.
Mapping change: Using a theory of change to guide planning and evaluation
This guide, written by Anne MacKinnon and Natasha Arnott for GrantCraft, describes the process of developing a theory of change to support planning and evaluation.
Making evaluations matter: A practical guide for evaluators
This guide, written by Cecile Kusters with Simone van Vugt, Seerp Wigboldus, Bob Williams and Jim Woodhill for the Centre for Development Innovation, presents a framework for planning and managing an evaluation with a focus on…
Children in crisis: Good practices in evaluating psychosocial programming
This guide from Save the Children aims to support staff in psychosocial program design and evaluation for projects involving children.
Recommendations in evaluation
This presentation, given by Lori Wingate for the Evaluation Center, Western Michigan University, takes a look at the 'why, what and how' of making recommendations in an evaluation.
A summary of the theory behind the LFA method
This paper from the Swedish International Development Cooperation Agency (Sida) has been designed to support staff in implementing the logical framework approach in project planning and design.
A participatory model for evaluating social programs
This paper from the James Irvine Foundation outlines a participatory approach to evaluating social programs.
A question of worth: Cost analysis in evaluation
This presentation from the World Health Organization analyses the different methods of cost analysis, including cost-benefit analysis and cost-effectiveness analysis.
Machine learning and meta-ethnography: Seven steps to synthesising 578 evaluations into four themes
This paper documents a case study using machine learning and meta-ethnography techniques to synthesise and draw lessons from 578 evaluations. This paper is part of the BetterEvaluation Innovation Working Paper series.
Machine learning in evaluative synthesis: Lessons from private sector evaluation in the World Bank Group
An exploration of the potential of machine learning techniques to enhance the efficiency of analyzing, classifying, and synthesizing extensive amounts of text in evaluation research.
Advanced content analysis: Can artificial intelligence accelerate theory-driven complex program evaluation?
This paper presents the methodology and results of an assessment of the applicability and utility of artificial intelligence for advanced theory-based content analysis.
Leveraging imagery data in evaluations: Applications of remote-sensing and streetscape imagery analysis
This paper discusses using imagery data in evaluations and the advantages and limitations of relevant methodologies.