This week's 52 Weeks of BetterEvaluation post brings our series on the BetterEvaluation Rainbow Framework to an end, and presents the final AEA hosted webinar recording.
BetterEvaluation recently published a paper which presented some of the confusion that can result when commissioners and evaluators don't spend enough time establishing basic principles and shared understanding before beginning the evaluation.
I'm sure most of our readers will agree that the goal of evaluation is not the fulfilment of a contract to undertake a study but the improvement of social and environmental conditions: evaluators really do want to see their evaluations used for positive change.
If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated.
BetterEvaluation recently published a new paper, ‘Two sides of the evaluation coin,’ exploring what can happen when miscommunication, changing leadership and misunderstanding disrupt the smooth running of an evaluation, and what can be done to minimise these problems.
While we work on the remaining blog posts on the recent AEA Coffee Break webinars, this week we're highlighting content and events recently suggested to us by users.
What's one of the most common mistakes in planning an evaluation? Going straight to deciding data collection methods. Before you choose data collection methods, you need a good understanding of why the evaluation is being done.
Whether you are commissioning an evaluation, designing one or implementing one, having - and sharing - a very clear understanding of what is being evaluated is paramount.
How do we ensure we address all the important aspects of an evaluation when we're planning it? How do we consider the different options without being overwhelmed?
Data analysis is sometimes the weak link in an evaluation plan. Answering key evaluation questions requires thoughtful analysis - and this needs appropriate tools.