This week’s blog is from Jonny Morell, editor of Evaluation and Program Planning and author of Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. He blogs at http://evaluationuncertainty.com/.
Unfortunately, I believe so. Last year I met a group of Brazilian evaluators at a conference and learned from them about the growing demand for good evaluation studies in Brazil, but also about the need for more capacity-building initiatives in this area, besides t
What is more important to you: a good education or a good healthcare system? Or perhaps employment or security is at the forefront of your mind at the moment. What about the environment or human rights?
Simon Hearn continues BetterEvaluation’s theme on the monitoring and evaluation of policy change by suggesting a set of measures to help those struggling to monitor the slippery area of policy influence and advocacy.
One of the challenges of working in evaluation is that important terms (like ‘evaluation’, ‘impact’, ‘indicators’, ‘monitoring’ and so on) are defined and used in very different ways by different people.
We’ve talked before on this blog about evaluating advocacy interventions. One of the hottest debates is how and to what extent it is possible to establish causation in advocacy programmes.
Continuing our season of blogs on presenting evaluation findings in ways that will get them read (and hopefully used), Joitske Hulsebosch, an independent consultant, contributes her ideas on how to present your findings in the form of an infographic.