52 Weeks of BetterEvaluation

During 2013 we're celebrating 52 weeks of BetterEvaluation. Each week we're featuring a particular method, option, task, tool, issue or event. There are links to resources, advice on choosing methods and using them well, and discussions of hot issues. 

52 Weeks of 2014 has begun! Find them here.

December

Week 52

Holiday gifts for everyone

For many of us, this is a time for reflecting back on the year and exchanging gifts.  So for this final post in the 52 weeks series for 2013, we wanted to share some evaluation gifts you can keep for yourself or share with your friends and colleagues.


Week 51


Strategies for commissioning evaluations successfully 

Earlier this year I co-taught a course on Evaluation for Public Sector Managers with the Australia and New Zealand School of Government (ANZSOG), using BetterEvaluation to structure the course and as a key resource. In one of the sessions, we worked with participants to generate useful strategies, processes and tasks for managing an internal or external evaluation. The results provide a useful checklist for anyone commissioning an evaluation in the public sector, and are also useful for those working in other sectors.

Week 50

Start Here

A while back, Nick Herft from our team invited BetterEvaluation users’ input on what we should include in the Start Here section. We received some great suggestions and promised to use them in reworking the section. Today, we present our initial attempt and ask for feedback and further suggestions.


Week 49

Evaluation e-learning in Spanish and Russian

While most of the evaluation resources on the BetterEvaluation site are in English, we're keen to provide access to resources in other languages.  In 2014, making the site more accessible in different languages will be one of our priorities.

Remember, you can get a rough translation of any page on the site using the Google Translate button.

November

Week 48

What does COP19 mean for monitoring and evaluation professionals?

For the past few weeks the international climate change community, from national negotiators to NGOs and campaigners, has gathered in Warsaw for the 19th ‘conference of the parties’ (COP), hosted by the UNFCCC. Dennis Bours, team leader of the SEA Change community of practice, which focuses on monitoring and evaluating climate change interventions in Asia and beyond, explores what COP19 means for M&E professionals.

Week 47

Using video to communicate evaluation findings

In the final post in our series of blogs on using video in evaluation, Glenn O'Neil joins us to discuss how you can use video to communicate your evaluation findings.


Week 46

An ethnography of evaluation - learning about evaluation from the inside using video

Conveying the complexities of the evaluation process isn’t easy, but video is one way to tackle the challenge. Quimera is a film company that was contracted to record the process of evaluating the USAID Growth with Equity in Mindanao III (GEM 3) project in the Philippines.


Week 45

Participatory video for M&E - unpacking how change happened

It is time for another blog series on BetterEvaluation, and this time we will be exploring the uses of video in evaluation. Video is a powerful tool that can be used in many different ways, and BetterEvaluation has only scratched the surface so far. Three experts will present three different uses of video in evaluation. In the first blog, Soledad Muniz, Head of the Participatory Video for Monitoring and Evaluation Programme at InsightShare, describes the power of Participatory Video as a tool to engage communities and stakeholders in evaluation and to collect data from the perspective of beneficiaries. Next week we'll hear from Paul Bearse from Quimera on documenting evaluations with video for use as a learning tool for evaluators and commissioners.

Week 44

How can monitoring data support impact evaluations?

Maren Duvendack and Tiina Pasanen explore the issue of using monitoring data in impact evaluations. Maren and Tiina work on the Methods Lab, a programme aiming to develop and test flexible and affordable approaches to impact evaluation. In this blog they discuss some problems with using monitoring data for impact evaluation, and suggest some solutions.


October

Week 43

Celebrating one year of BetterEvaluation

This week we're celebrating the first year of BetterEvaluation since it went live to the public in October 2012. Thank you to everyone who has contributed material, reviewed content, developed the website, and participated in live events.


Week 42

Using value for money in evaluation - a conversation

"Value for Money" is a term that is increasingly used in evaluation - often to mean very different things.  This week we're delighted to launch a new paper on Value for Money, written by Farida Fleming, an evaluator from Assai Consult and PhD Candidate at RMIT University, and developed by a Working Group convened through the Australasian Evaluation Society. In this week's blog post, Farida discusses Value for Money with Julian King a Public Policy Consultant from Julian King and Associates Ltd. Farida and Julian have agreed to serve as stewards for the pages on the BetterEvaluation site associated with Value for Money and invite your responses.

Week 41

Recommended content from the BetterEvaluation community

You'll find hundreds of evaluation resources on the BetterEvaluation site. Some have come from recommendations by stewards, some from our writeshop project and design clinics, and many have been recommended by BetterEvaluation users. This week we highlight some of these user-recommended resources, show how you can find the latest new material, and explain how to tell us about your own recommendations.

Week 40

How to find evidence and use it well

When reviewing evidence for decision-making, the first challenge is deciding which types of evidence to include. In this blog, Jessica Hagen-Zanker from the Overseas Development Institute introduces a new approach to literature reviews that combines the rigour of full systematic reviews without their disadvantages of resource intensiveness and inflexibility.


September

Week 39

Professional development opportunities

If you are looking to learn more and hone your skills in evaluation, there has never been a better time to tap into the many fantastic free or low-cost professional development opportunities on offer.

While many are presented face-to-face at conference workshops and training sessions, you can often access recordings online after the event if you miss the live workshop or webinar.

This week we want to bring to your attention a few of the face-to-face workshops, webinars and online courses that BetterEvaluation and its partners have to offer. First up, let’s look at some of the free online professional development opportunities out there!

Week 38

Ubuntu in evaluation

The South African Monitoring and Evaluation Association conference starts this Wednesday in Johannesburg, with the theme ‘Improving use and results’. On Thursday, the programme includes a session called ‘Made in Africa: evaluation for development’, exploring values and diversity in development evaluation. To kick off the discussion, we asked Benita Williams, an evaluator from Pretoria, South Africa, how her values affect her evaluation work.

Week 37

Collaborative Outcomes Reporting

Collaborative Outcomes Reporting (COR) is an approach to impact evaluation that combines elements of several rigorous non-experimental methods and strategies. You’ll find it on the Approaches page of the BetterEvaluation site - an approach combines several options to address a number of evaluation tasks. This week we talk to Jess Dart, who developed COR. Jess is the new steward for BetterEvaluation’s COR page and, together with Megan Roberts from Clear Horizon, has provided a step-by-step guide, advice on choosing and using the approach well, and examples of its use.

Week 36

Supporting appropriate participation in evaluations

This week BetterEvaluation is at the Australasian Evaluation Society conference in Brisbane, Australia, where the theme is "Evaluation shaping a better future: Priorities, pragmatics and power".


August

Week 35

Social return on investment in evaluation

In this week’s blog we interview Wouter Rijneveld, a consultant working on measurement and utilisation of results, mainly in international development. He recently published a paper on the use of the Social Return on Investment approach in Malawi and we wanted to find out about his experience of using this less-reported approach. We were doubly interested when he told us that he was initially skeptical about SROI.

Week 34

Generalisations from case studies?

An evaluation usually involves some level of generalisation from the findings to other times, places or groups of people. If an intervention is found to be working well, we might generalise to say that it will continue to work well, that it will work well in another community, or that it will work well when expanded to wider populations. But how far can we generalise from one or more case studies? And how do we go about constructing a valid generalisation? In this blog, Rick Davies explores a number of different types of generalisation and some of the options for developing valid generalisations.

Week 33

Monitoring policy influence part 2 - like measuring smoke?

In the second part of our mini-series on monitoring and evaluating policy influence, Arnaldo Pellini, Research Fellow at the Overseas Development Institute, explores a project supporting research centres in Australia to monitor their impact on health policy in Southeast Asia and the Pacific. Arnaldo outlines the main challenges and makes some recommendations for others looking at the M&E of policy influence. Read part one of the mini-series here.

Week 32

Monitoring and evaluating policy influence and advocacy (Part 1)

This two-part mini-series looks at the monitoring and evaluation of policy influencing and advocacy. This blog introduces a great new paper from Oxfam America exploring the topic from an NGO perspective; the second blog will present the perspective of a research programme.


July

Week 31

A series on mixed methods in evaluation

This week we are focusing on mixed methods in evaluation. We'll have two further blogs on the subject: one exploring an evaluation that used mixed methods, and the other asking whether we are clear enough about what mixed methods really means - there are many evaluations out there claiming to be mixed methods when all they do is supplement a quantitative survey with interview data.

Week 30

Manage an evaluation or evaluation system

This week's 52 Weeks of BetterEvaluation post brings our series on the BetterEvaluation Rainbow Framework to an end, and presents the final AEA-hosted webinar recording. Over the series we've introduced the seven clusters of evaluation tasks and many of the options available. You can find a list of all eight posts in the series below.


Week 29

Weighing the data for an overall evaluative judgement

How do you balance the different dimensions of an evaluation?

Is a new school improvement program a success if it does a better job of teaching mathematics but a worse job of teaching language? Is it a success if it works better for most students but leads to a higher rate of school dropout? What if the dropout rate has increased for the most disadvantaged? And what about the costs of the program? Is it a success if the program gets better results but costs more?
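
One way to make such trade-offs explicit is a weighted synthesis: score each dimension, agree weights with stakeholders, and combine them into an overall judgement. Here is a minimal illustrative sketch in Python - the dimensions, scores and weights are invented for illustration, not taken from a real evaluation:

```python
# Illustrative only: the dimensions, scores and weights are invented.
# Scores are on a 0-10 scale; weights reflect the relative importance
# negotiated with stakeholders and should sum to 1.
dimensions = {
    "mathematics results":         (8.0, 0.30),
    "language results":            (4.0, 0.30),
    "retention of all students":   (6.0, 0.25),
    "retention of disadvantaged":  (3.0, 0.15),
}

overall = sum(score * weight for score, weight in dimensions.values())
print(f"Weighted overall score: {overall:.2f} / 10")  # 5.55 / 10

# A weighted average can mask a critical failure, so evaluators often
# add a "hurdle": if any essential dimension scores too low, the
# overall judgement is capped regardless of the average.
if any(score < 4.0 for score, _ in dimensions.values()):
    print("Capped: an essential dimension is below the hurdle of 4.")
```

Numeric weighting is only one synthesis option, and the hurdle shows why: an average alone can hide a critical failure on a single dimension.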

Week 28

Framing an evaluation: the importance of asking the right questions

BetterEvaluation recently published a paper which presented some of the confusion that can result when commissioners and evaluators don’t spend enough time establishing basic principles and understanding before beginning the evaluation. This blog, from Mathias Kjaer of Social Impact (SI), uses a recent evaluation experience in the Philippines to present some tips on how to choose the right questions to frame an evaluation.

Week 27

How can evaluation make a difference?

I’m sure most of our readers will agree that the goal of evaluation is not the fulfillment of a contract to undertake a study but the improvement of social and environmental conditions: evaluators really do want to see their evaluations used for positive, productive purposes. In these days of information overload, simply publishing an evaluation report is not a sufficient strategy to inform or influence these improvements.

June

Week 26

Understanding Causes

If you are doing any kind of outcome or impact evaluation, you need to know something about whether the changes observed (or prevented) had anything to do with the program or policy being evaluated. After all, the word “outcome” implies something that “comes out of” the program – right?


Week 25

Evaluators have feelings too: Two sides of the evaluation coin

BetterEvaluation recently published a new paper, ‘Two sides of the evaluation coin,’ exploring what can happen when miscommunication, changing leadership and misunderstanding disrupt the smooth running of an evaluation – and what can be done to minimise these risks. Authors from both the evaluator and commissioner sides wrote the report jointly. John Rowley, who was part of the evaluation team, has blogged on the paper, saying that it ‘deals with issues that profoundly affect program evaluations but which are almost never shared in an open and public way.’ His fellow evaluator, Pete Cranston, has also blogged about what the experience taught him about the role of evaluation in learning, and the role of failure. Now their co-author Penelope Beynon, who was a commissioner for the evaluation, shares her side of the story and argues for the importance of recognising the emotions involved in a bumpy evaluation ride.

Week 24

Choosing methods to describe activities, results and context

How many methods do you usually see in evaluation reports as having been used to collect data? Chances are you’ll see project document review, key informant interviews, surveys of some kind, and perhaps group interviews with intended beneficiaries. These methods are all useful for describing what has happened, the outcomes, and the context in which change occurred.

Week 23

Tips for delivering negative results

It’s a scenario many evaluators dread: the time has come to present your results to the commissioner, and you’ve got bad news. Failing to strike the right balance between forthrightness and diplomacy can mean you either don’t get your message across, or alienate your audience.


Week 22

The latest resources and events suggested by users

While we work on the remaining blog posts on the recent AEA Coffee Break webinars, this week we're highlighting content and events recently suggested to us by users.

Huge thanks to all of our users who have been pointing out great resources and useful events – keep them coming!

May

Week 21

Framing the evaluation

What’s one of the most common mistakes in planning an evaluation? Going straight to deciding data collection methods. Before you choose data collection methods, you need a good understanding of why the evaluation is being done. We refer to this as framing the evaluation.


Week 20

Defining what needs to be evaluated

Whether you are commissioning an evaluation, designing one or implementing one, having - and sharing - a very clear understanding of what is being evaluated is paramount. For complicated or complex interventions this isn't always as straightforward as it sounds, which is why BetterEvaluation offers specific guidance on options for doing this.


Week 19

Using the Rainbow Framework

How do we ensure we address all the important aspects of an evaluation when we’re planning it? How do we manage to consider the different options without being overwhelmed?

This week I was pleased to join over 500 people in the first in our Coffee Break Webinar series with the American Evaluation Association to explore these issues.

Week 18

Reading evaluation journals

Evaluation journals play an important role in documenting, developing, and sharing theory and practice.

In this week's post, we've highlighted evaluation journals that would be useful to add to your regular reading, or to refer to for specific searches.

April

Week 17

Analyzing data using common software

Data analysis is sometimes the weak link in an evaluation plan.  Answering key evaluation questions requires thoughtful analysis - and this needs appropriate tools.
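
As a taste of what "common software" can do, here is a minimal sketch using Python's pandas library, one example of widely available analysis tooling - the file name and column names are invented placeholders:

```python
# Minimal illustrative analysis sketch using pandas.
# "survey.csv", "region" and "satisfaction" are hypothetical
# placeholders for your own data.
import pandas as pd

df = pd.read_csv("survey.csv")

# Frequency counts for a categorical question.
print(df["region"].value_counts())

# Mean satisfaction by region, with group sizes: a simple summary
# that directly answers a descriptive evaluation question.
summary = df.groupby("region")["satisfaction"].agg(["count", "mean"])
print(summary.round(2))
```

The same grouping-and-summarising logic can be done in a spreadsheet pivot table; the point is matching the tool to the evaluation question, not the tool itself.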


Week 16

Identifying and documenting emergent outcomes of a global network

Global voluntary networks are complex beasts with dynamic and unpredictable actions and interactions. How can we evaluate the results of a network like this? Whose results are we even talking about? This was the challenge facing BioNET as they came to the end of their five-year programme, and it is the subject of the second paper in the BetterEvaluation writeshop series, which we introduce in this week's blog.

Week 15

Evaluation conferences 2013

One of the most effective ways of learning about the evaluation field is to attend a conference, present your work and interact with other professionals.


Week 14

Addressing ethical issues

How do we ensure our evaluations are conducted ethically? Where do we go for advice and guidance, especially when we don't have a formal process for ethical review?


March

Week 13

Evaluation on a shoestring

Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources. This can mean little money (or no money) to engage external expertise and a need to rely on resources internal to an organisation – specifically people who might also have less time to devote to evaluation.


Week 12

Having an adequate theory of change

Many evaluations use a theory of change approach, which identifies how activities are understood to contribute to a series of outcomes and impacts. These can help guide data collection, analysis and reporting.

But what if the theory of change has gaps, leaves out important things – or is just plain wrong?

Week 11

Using rubrics

The term "rubric" is often used in education to refer to a systematic way of setting out the expectations for students in terms of what would constitute poor, good and excellent performance.
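
To make the idea concrete, here is a small illustrative sketch of a rubric as a data structure - the criterion, levels and descriptors are invented examples, not a recommended standard:

```python
# Illustrative sketch of a simple rubric as a data structure.
# The criterion, levels and descriptors are invented examples.
rubric = {
    "stakeholder engagement": {
        "excellent": "All key stakeholder groups helped shape "
                     "decisions at every stage.",
        "good": "Most key stakeholder groups were consulted at "
                "the main decision points.",
        "poor": "Stakeholders were informed of decisions only "
                "after they had been made.",
    },
}

def describe(criterion: str, level: str) -> str:
    """Look up the agreed descriptor for a judgement."""
    return rubric[criterion][level]

print(describe("stakeholder engagement", "good"))
```

The value of a rubric lies less in the lookup than in agreeing the descriptors in advance, so that a judgement of "good" means the same thing to everyone.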


Week 10

Having a theory in the theory of change

There is increasing recognition that a theory of change can be useful when planning an evaluation. A theory of change is an explanation of how activities are understood to contribute to a series of outcomes and impacts. It might be called a program theory, an intervention logic, an outcomes hierarchy, or something else. It is usually represented in a diagram called a logic model, which can take various forms.
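
As a simple illustration (the intervention and its outcomes here are invented), the most basic "pipeline" form of a logic model is just an ordered chain from inputs to impacts:

```python
# Illustrative sketch of the most basic "pipeline" logic model.
# The intervention and its outcomes are invented examples.
logic_model = [
    ("inputs",     "trained facilitators, curriculum, funding"),
    ("activities", "weekly parenting-skills workshops"),
    ("outputs",    "120 parents complete the course"),
    ("outcomes",   "parents adopt positive discipline strategies"),
    ("impacts",    "improved child wellbeing"),
]

for stage, example in logic_model:
    print(f"{stage:>10}: {example}")
```

Real logic models are often richer than a single chain - with branches, feedback loops or parallel strands - but each form still answers the same question: how are activities understood to lead to impacts?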

Week 9

Addressing complexity

This week at the Community of Evaluators South Asia’s Evaluation Conclave in Kathmandu, a number of sessions referred to the importance of addressing complexity, including the SEA Change Climate Change session on Complexity and Attribution, and Michael Quinn Patton's keynote address.


February

Week 8

Using Social Network Analysis for M&E

Most of the work done in development is done in collaboration, in partnership with individuals or organizations who contribute to a particular task or project we are working on. These collaborations are sometimes very straightforward, but sometimes they are quite complex, involving many links and relationships.
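
To give a feel for how social network analysis handles such relationships, here is a minimal sketch using Python's networkx library - the organisations and links are invented for illustration:

```python
# Illustrative sketch: mapping a small collaboration network and
# asking who is most connected. Organisations and links are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("NGO A", "Ministry"),
    ("NGO A", "NGO B"),
    ("NGO A", "Donor"),
    ("NGO B", "University"),
    ("Ministry", "University"),
])

# Degree centrality: which partner has the most direct links?
for org, score in sorted(nx.degree_centrality(G).items(),
                         key=lambda kv: -kv[1]):
    print(f"{org}: {score:.2f}")
```

Even a simple measure like degree centrality can prompt useful M&E questions, such as whether a network depends too heavily on one broker organisation.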


Week 7

Evaluation associations and societies

Across the world evaluation associations provide a supportive community of practice for evaluators, evaluation managers and those who do evaluation as part of their service delivery or management job.


Week 6

Facilitating evaluation decisions

There are many decisions to be made in an evaluation – its purpose and scope; the key evaluation questions; how different values will be negotiated; what should be the research design and methods for data collection and analysis; how information will be shared; what recommendations should be developed and how. 

January

Week 5

Info overload - how to navigate the maze of methods and approaches

As part of developing the BetterEvaluation site, we ran an "Evaluation Challenge" process, inviting people to submit their biggest challenges in evaluation, and then inviting experts to suggest ways to address these.

This week we present the first challenge, one that is frequently heard from people when they first start learning about the field of evaluation: How can I cope with the overwhelming number of monitoring and evaluation tools?

Week 4

Including unintended impacts

Evaluation is not just about assessing whether objectives have been met.  Identifying and considering unintended impacts can be a critically important part of deciding whether or not a program, a policy or a project has been a success.  But not all guides to evaluation acknowledge the importance of unintended impacts – or give advice about methods to identify and include them.

Week 3

Q & A about drawing logic models

This week on BetterEvaluation we're presenting Questions and Answers about logic models. A logic model represents a program theory - how an intervention (such as a program, project or policy) is understood to contribute to its impacts.


Week 2

Conducting Effective Meetings

There are many decisions to be made in an evaluation – its purpose and scope; the key evaluation questions; how different values will be negotiated; what should be the research design and methods for data collection and analysis; how information will be shared; what recommendations should be developed and how. Many of these decisions are made – or unmade – in meetings, which makes conducting effective meetings an important evaluation skill.


Week 1

Using evaluability assessment to improve Terms of Reference

Many problems with evaluations can be traced back to the Terms of Reference (ToR) - the statement of what is required in an evaluation.  Many ToRs are too vague, too ambitious, inaccurate or not appropriate.