52 weeks of BetterEvaluation: Week 13: Evaluation on a shoestring

By Patricia Rogers

Many organisations are having to find ways of doing more for less – including doing evaluation with fewer resources. This can mean little money (or no money) to engage external expertise and a need to rely on resources internal to an organisation – specifically people who might also have less time to devote to evaluation.

This week’s post draws on issues and strategies from a recent session with the South Australian branch of the Australasian Evaluation Society who were interested in exploring how BetterEvaluation could support ‘evaluation on a shoestring’.

Do you have other tips, strategies or resources you've found useful in this situation?

Types of shoestring evaluation

Doing an evaluation ‘on a shoestring’ refers to conducting it within limited resources. There are different levels of limited resources.

Doing evaluation with "less string" might mean reducing the budget for an external evaluation, and/or the degree of support available from internal sources such as an evaluation unit. The key challenge here is reducing the scope appropriately. Don’t keep the scope of the evaluation the same and expect it to be done for a fraction of the budget. Don’t try to take shortcuts with the initial scoping of the evaluation, or with the active management of the evaluation. Be clear about the trade-offs between depth and breadth, and between timeliness and comprehensiveness, in the design, and keep focused on the most important priorities. In this scenario, pay particular attention to suggestions 6, 7 and 8 below.

Doing evaluation with "very little string" might mean having no budget to contract an evaluation externally, but having some scope to access resources (either internally or externally) for advice and quality assurance. The key challenge here is making the best use of these. In this scenario, use all the suggestions to plan the evaluation effectively and then use the available resources for meta-evaluation (see suggestion 12 below) at critical stages – when finalising the evaluation brief, the design and the report. Use the BetterEvaluation Rainbow Framework, which outlines 32 tasks in an evaluation, and related material on the BetterEvaluation site for guidance on doing all of these.

Doing evaluation with "no string" has the same challenges as "very little string" but without any scope to engage external expertise for advice and quality assurance. In addition to the advice above, try to identify a peer with whom you can do reciprocal peer review at critical stages. Having to articulate and explain the decisions made for each of the 32 tasks can improve the quality of the evaluation. See if you can secure some pro bono advice at critical stages.

12 suggestions for evaluation on a shoestring

The suggestions relate to four different stages of an evaluation.

1. Scoping – Evaluation brief: clarifying what the evaluation needs to do – purpose, intended uses, Key Evaluation Questions, standards for the evaluation, and what ‘success’ looks like for what is being evaluated. Done by commissioners, possibly with external advice and facilitation. (BetterEvaluation site: Manage, Frame)

2. Designing – Evaluation design: how the evaluation will answer the Key Evaluation Questions – data collection, analysis and reporting. Can be done by commissioners (as part of an RFP) or by evaluators (as part of a proposal, as the first stage of the evaluation, or as a separate project). (BetterEvaluation site: Describe, Understand Causes, Synthesize – for choosing options and approaches)

3. Conducting – Evaluation report: collecting, analysing and reporting data in terms of the Key Evaluation Questions. Done by evaluators, sometimes with support from commissioners. (BetterEvaluation site: Describe, Understand Causes, Synthesize – for using options and approaches well)

4. Using – Evaluation use: disseminating findings, developing recommendations (if not done as part of the report), and developing and tracking plans. Done by commissioners, possibly with support from evaluators. (BetterEvaluation site: Report and Support Use)

Evaluation Brief

1. Purpose – Identify and address the priority intended uses of primary intended users

Primary intended users are the specific, identified people who will use the findings from the evaluation. Ideally, they should be actively engaged in the evaluation decision-making process to ensure it will be relevant and credible to them.

 

Identifying the Intended User(s) and Use(s) of an Evaluation

A short guide produced by IDRC (International Development Research Centre, Canada).

2. Focus – Develop a small set of clear, answerable and useful Key Evaluation Questions

KEQs are not interview questions but the high-level questions an evaluation is intended to answer. They are usually a combination of Descriptive Questions (What is the situation? What has happened?), Causal Questions (Did the program contribute to or cause the observed outcomes?), and Evaluative Questions (Was it good, good enough, better than before, better than alternatives?).

 

Advice for commissioners of evaluation to get maximum value from external evaluators

A presentation to the ANZEA conference (Aotearoa/New Zealand Evaluation Association) by E. Jane Davidson & Nan Wehipeihana (2010) which outlines some generic evaluation questions.

3. Resources – Clarify what resources will be available for the evaluation

Resources include the time of internal staff, especially those with evaluation expertise and/or content knowledge; funding to engage external expertise; and funding for costs incurred in data collection and analysis (e.g. software). They also include the time available before reports are needed. If no funding is available for external resources and internal resources are not sufficient, investigate options for reciprocal help with evaluations in other programs or organisations, or for assistance from universities or other groups.

 

4. Standards – Be clear about the level of accuracy and generalisability needed

What will be ‘good enough’ data?

 

Evaluation Design

5. Program Theory – Develop an explanation of how the program is understood to contribute to its intended outcomes and impacts

This can be in the form of inputs -> processes -> outputs -> outcomes -> impacts if the program is fairly simple and all activities are done at the beginning of the causal chain. Otherwise an ‘outcomes hierarchy’ format might work better to show how different activities contribute to particular interim outcomes.
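
If you are working without specialist diagramming or planning software, even a very simple data structure can capture an outcomes hierarchy well enough to discuss and refine it. The sketch below is purely illustrative – the program, activities and outcomes are invented, and Python is used here only as an example of a freely available general-purpose tool, not as something this advice prescribes.

```python
# A minimal, illustrative outcomes hierarchy for a hypothetical
# community gardening program. All names here are made-up examples.
outcomes_hierarchy = {
    "Improved household food security": {                      # intended impact
        "Households grow more of their own vegetables": {       # interim outcome
            "Participants apply gardening skills at home": {
                "Participants complete hands-on workshops": {},  # activity-level output
            },
        },
        "Households have better access to seeds and tools": {
            "Partner stores stock subsidised starter kits": {},
        },
    },
}

def print_hierarchy(node: dict, depth: int = 0) -> None:
    """Print the hierarchy as an indented tree, with the impact at the top."""
    for outcome, contributors in node.items():
        print("  " * depth + "- " + outcome)
        print_hierarchy(contributors, depth + 1)

if __name__ == "__main__":
    print_hierarchy(outcomes_hierarchy)
```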

6. Coverage – Plan data collection and analysis in terms of a matrix of options

A matrix with Key Evaluation Questions down one side, and possible data sources across the top, will make it easier to plan for efficient data collection that covers all questions.

The Evaluation Matrix
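
If no specialist planning tools are available, the matrix itself can live in a spreadsheet, or be generated with a short script and then refined by hand. The sketch below is a minimal illustration under that assumption: the Key Evaluation Questions, data sources and coverage choices are all placeholders, and the script simply writes the grid to a CSV file that can be opened in Excel or any spreadsheet program.

```python
import csv

# Hypothetical Key Evaluation Questions and candidate data sources.
# Replace these with your own; the point is the structure, not the content.
keqs = [
    "How well was the program implemented?",
    "What outcomes occurred, and for whom?",
    "Did the program contribute to those outcomes?",
]
data_sources = ["Project records", "Participant survey", "Key informant interviews"]

# Mark which sources are planned for which questions (illustrative choices only).
coverage = {
    keqs[0]: {"Project records", "Key informant interviews"},
    keqs[1]: {"Participant survey", "Key informant interviews"},
    keqs[2]: {"Project records", "Participant survey", "Key informant interviews"},
}

# Write the matrix to a CSV file so it can be reviewed and refined in a spreadsheet.
with open("evaluation_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Key Evaluation Question"] + data_sources)
    for keq in keqs:
        writer.writerow([keq] + ["X" if s in coverage[keq] else "" for s in data_sources])
```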

7. Existing Data – Make maximum possible use of existing data if quality is adequate

This includes project documentation, performance indicators, documented observations, social indicators, and findings and methods from other relevant evaluations.

8. Short Cuts – Identify where it will be possible to take shortcuts in data collection and analysis

For example, it might be possible to reduce the cost of an evaluation by:

- conducting group interviews instead of individual interviews
- reconstructing baseline data through retrospective recall
- reducing sample sizes
- using email questionnaires instead of interviews
- using a small purposeful sample instead of a large random sample
- using volunteer interviewers (either staff or community members) instead of professionals
- accessing data or respondents through links with partner organisations
- using existing computer software for analysis (e.g. Excel and Word) rather than specialist software not available in the organisation (e.g. SPSS and NVivo).

 

Designing quality impact evaluations under budget, time and data constraints

Michael Bamberger (2005)

 

Simplifying Qualitative Data Analysis Using General Purpose Software Tools

Nancy La Pelle (2004)
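
In the same spirit of using general-purpose rather than specialist software, very basic coding and tallying of open-ended responses can sometimes be done with tools already on hand. The sketch below is not the approach described in the paper above, and it is no substitute for careful human coding; it is an invented illustration of simple keyword-based tagging in Python, with made-up responses and codes, to show the kind of shortcut that may be ‘good enough’ for some purposes.

```python
from collections import Counter

# Hypothetical open-ended survey responses (invented for illustration).
responses = [
    "The workshops were useful but the venue was hard to reach.",
    "Great facilitators; I would have liked more follow-up support.",
    "Hard to attend because of the travel time, but the content was useful.",
]

# A very simple keyword-based codebook; real coding should be checked by hand.
codebook = {
    "useful_content": ["useful", "great", "helpful"],
    "access_barriers": ["hard to reach", "travel", "hard to attend"],
    "wants_follow_up": ["follow-up", "more support"],
}

# Count how many responses touch on each code.
code_counts = Counter()
for response in responses:
    text = response.lower()
    for code, keywords in codebook.items():
        if any(keyword in text for keyword in keywords):
            code_counts[code] += 1

for code, count in code_counts.most_common():
    print(f"{code}: {count} of {len(responses)} responses")
```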

9. Risk Management – Identify possible risks and trial data collection, analysis and reporting

Any design can have unforeseen problems in implementation. Build in a short cycle of collecting, analysing and discussing some real data before finalising the design, and always trial and pilot data collection tools and analysis strategies with hypothetical or early data. Back up all data and store securely. Check for unrepresentative samples (especially if you have a low response rate) and triangulate data to improve validity.

 

Mistakes not to make. International suggestions on Genuine Evaluation

Patricia J Rogers and E Jane Davidson blog about real, genuine, authentic, practical evaluation
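
One concrete way to check for unrepresentative samples, as suggested above, is to compare the profile of respondents with what is known about all participants. The sketch below is a minimal illustration only: the groups, counts and the 10-percentage-point flag threshold are invented for the example, and a real check would use whichever characteristics matter for your evaluation questions.

```python
# Hypothetical counts: program participants overall versus survey respondents.
population = {"Region A": 400, "Region B": 250, "Region C": 150}
respondents = {"Region A": 60, "Region B": 15, "Region C": 5}

pop_total = sum(population.values())
resp_total = sum(respondents.values())

print(f"Response rate: {resp_total / pop_total:.0%}")
print(f"{'Group':<10}{'Population':>12}{'Respondents':>13}")
for group in population:
    pop_share = population[group] / pop_total
    resp_share = respondents[group] / resp_total
    # Flag groups whose share of respondents differs markedly from their
    # share of the population (threshold chosen arbitrarily for this sketch).
    flag = "  <- check" if abs(pop_share - resp_share) > 0.10 else ""
    print(f"{group:<10}{pop_share:>12.0%}{resp_share:>13.0%}{flag}")
```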

Evaluation Report

10. Messaging – Develop a report outline early on and negotiate agreement about format

Clarify what is required in the evaluation report well before starting to write it.

 

Improving evaluation questions and answers: Getting actionable answers for real-world decision makers

A presentation from E. Jane Davidson (2009) to the AEA (American Evaluation Association) conference which explains how to use a skeleton report to negotiate the format of a final report. Available through the AEA elibrary.

Supporting Use

11. Actively Plan for Use – Implement specific activities to support users to understand and use findings

Simply releasing a report is not sufficient. Use different avenues, processes and formats to make the findings readily available to primary intended users.

Overall

12. Meta-evaluate – Build in formative and summative evaluation of the evaluation

Formative evaluation can improve the evaluation brief, the evaluation design, and the evaluation report. If you have limited scope to engage external expertise, focus it here. If you have no budget for external review of the evaluation, see if you can engage with partners, colleagues or others to review the evaluation at key stages. Finish the process by documenting learnings about doing evaluation – perhaps through an after action review.

 
