Collaborative outcomes reporting

Contributing author
Jess Dart, Megan Roberts
Contributing steward
Jess Dart

Collaborative outcomes reporting (COR) is a participatory approach to impact evaluation built around a performance story that presents evidence of how a program has contributed to outcomes and impacts. The performance story is then reviewed by both technical experts and program stakeholders, who may include community members.

Developed by Jess Dart, COR combines contribution analysis and Multiple Lines and Levels of Evidence (MLLE), mapping existing and additional data against the program logic to produce a performance story. Performance story reports are short reports about how a program has contributed to outcomes. Although they may vary in content and format, most are brief, mention the program context and aims, relate to a plausible results chain, and are backed by empirical evidence (Dart and Mayne, 2005). The aim is to tell the ‘story’ of a program’s performance using multiple lines of evidence.

COR adds processes of review by an expert panel and by stakeholders, sometimes including community members, to check the credibility of the evidence about what impacts have occurred and the extent to which these can credibly be attributed to the intervention. It is these components of expert panel review (the outcomes panel) and a collaborative approach to developing outcomes (through summit workshops) that differentiate COR from other approaches to outcome and impact evaluation.


Steps

COR uses a mixed-methods approach that involves the participation of key stakeholders, generally organised into six process steps. Participation can occur at all stages of this process:

  1. Scoping: an inception/planning workshop is held. In this workshop, the program logic is clarified, existing data is identified and evaluation questions are developed.
  2. Data trawl: a trawl of existing evidence is generally undertaken and can include both primary and secondary data sources. Program staff may be enlisted to help collate the data.
  3. Social inquiry: social inquiry can include any form of data gathering, qualitative or quantitative. If qualitative, interviews can be conducted by volunteers who are given a short training session in interviewing and an interview guide. This is a very effective way to involve staff in the data where there is sufficient enthusiasm around the process; otherwise, consultants or the evaluation managers conduct all or a proportion of the interviews. In many COR examples, the most significant change (MSC) technique is used at some point in the social inquiry process as a way of capturing stories of change, both expected and unexpected.
  4. Data analysis and integration: quantitative and qualitative data can be analysed together according to the outcomes in the program logic. A “results chart” is often used to integrate different sets and types of data.
  5. Outcomes panel: people with relevant scientific, technical or sectoral knowledge are brought together and presented with the range of evidence compiled in step 4. They are then asked to assess the contribution of the intervention towards its goals, given the available knowledge, and to explore rival hypotheses that could explain the data. A citizens’ jury can be substituted for the outcomes panel.
  6. Summit workshop: at a large workshop, key findings and recommendations are synthesised, and examples of changes are identified and added (using material from MSC if available, and MSC processes to select the most significant stories). The summit should involve broad participation of key stakeholders such as program staff and community members.

Collaborative outcomes report structure: the report aims to explore and communicate the extent to which a program has contributed to outcomes. Under COR, reports are short and generally structured into the following sections:

  1. A narrative section explaining the program context and rationale.
  2. A ‘results chart’ (see FAQs) summarising the achievements of the program against a program logic model.
  3. A narrative section describing the implications of the results, e.g. the achievements (expected and unexpected), the issues and the recommendations.
  4. A section providing a number of ‘vignettes’ that give instances of significant change, usually as first-person narratives.
  5. An index providing more detail on the sources of evidence.

COR can be applied across multiple sectors and scales of evaluation. The approach can be particularly useful when the evaluation does not have well-defined outcomes at inception, or when outcomes are emergent, complicated or complex. It has been used in a wide range of sectors, including overseas development, community health and Indigenous education, but the majority of work has occurred in natural resource management, with the Australian Government funding 20 pilot studies in 2007–09.

Mapping the approach in terms of tasks and methods

Mapping this approach against our Rainbow Framework shows how collaborative outcomes reporting draws on tasks and methods from across the evaluation process.

MANAGE | Deciding how an evaluation will be managed

Develop evaluation capacity: Involve project teams and staff in social inquiry and data trawling.

COR can help build evaluation capacity because it has a specific mandate for involving project teams and staff in the social inquiry or data gathering phase.

DEFINE | Understanding what is being evaluated

Develop program theory / logic model: A key element of the COR approach is the development of a program logic. This is used to map data and results and to tell the ‘performance story’ of the evaluation. Generally, an outcomes hierarchy is used for this purpose.

DESCRIBE | Answering descriptive questions

Combine qualitative and quantitative data: COR is well suited to combining multiple types of data, such as qualitative and quantitative. This is partly due to COR’s foundations in MLLE, but also because COR uses the expert review (outcomes panel) and summit processes to weave those data types together coherently and meaningfully, mapped against a program logic.

Collect/retrieve data: Data trawl.

The data trawl step draws on project records and any other existing monitoring data to inform social inquiry.

SYNTHESISE | Combining and using evidence

Synthesise data from a single evaluation: Outcomes panel, Summit workshop.

The outcomes panel (a form of expert panel) and summit workshop both act to synthesise data within the evaluation process.

Advice

Advice for CHOOSING this method (tips and traps) 

  • When reports are needed that provide brief, clear messages and an easy audit trail to the evidence that substantiates these claims. Organisations often place a high value on the reports because they strike a good balance between depth of information and brevity and are easy for staff and stakeholders to understand. They help build a credible case that a contribution has been made.
  • When there is a desire to include program staff and other stakeholders in the process and develop their evaluation capacity. The participatory process by which reports are developed offers many opportunities for staff and stakeholder capacity building. COR processes are a great way to kick off a new monitoring and evaluation system, because they involve synthesising and reflecting on all existing data and data gaps (a great platform for thinking about what data is really needed). A COR process can also be valuable for garnering buy-in and ownership for a program or evaluation process.
  • When questions need answering about the extent to which an investment contributes to outcomes. COR mainly focuses on answering this type of evaluation question and should therefore not be seen as the only reporting tool. Other methods and approaches are needed to answer key evaluation questions about the appropriateness of the investment or intervention, or its cost-effectiveness; COR is not designed to address these questions, although they may be added to the methodology in the future.
  • When stakeholders’ values will be used as the evaluative criteria: COR is based on the premise that the values of program staff and key stakeholders are of the highest importance in an evaluation. The evaluators attempt to “bracket off” their own opinions and instead present a series of data summaries to panel and summit participants for them to analyse and interpret. Values are surfaced and debated throughout the process: participants debate the value and significance of data sources and come to agreement on the key findings of the evaluation.
  • When a program has emergent or complex outcomes that are not fully defined at the outset. For this reason, the program logic is refreshed at the start of the evaluation process. In addition, qualitative inquiry is used to capture unexpected outcomes, and deliberative processes are used to make sense of the findings.

Advice for USING this method (tips and traps)

  • Including stakeholders needs delicate management and careful facilitation: bringing stakeholders in at the wrong time can create problems for the evaluation, particularly in the summit phase. It is important that the voices of all stakeholders who will be at the summit workshop, or who will have a hand in reviewing the report, are included in the data collection phase, whether through social inquiry or through the data trawl and document review. If they aren’t included, the summit workshop can become focused on data collection rather than its real purpose: to ground-truth the findings and formulate recommendations.
  • Focus the key evaluation questions carefully: no evaluation can answer all of the questions the program’s stakeholders may ask, so it is critical to prioritise questions in the scoping phase. All participants should be clear about what is being evaluated and what the evaluation will focus on.
  • Managing the outcomes panel needs consideration of conflicts of interest: assembling the experts for the outcomes panel requires care to avoid conflicts of interest. While experts may inevitably be connected to the program in some way, given the nature of the evaluand, it is important for neutrality that they did not design the program or form part of it. Furthermore, in highly political evaluands, reaching agreement on outcomes within a panel of experts may be fraught; in such cases, an expert evaluation may be a more appropriate approach than COR.
  • Ensure that the performance story and data collection do not focus only on positive aspects of performance: CORs have been criticised for being too appreciative, or for being incapable of telling a bad story. While this is certainly a risk, the technique attempts to address it in a number of ways. Firstly, all informants are asked to describe the strengths and the weaknesses of the program, and these weaknesses or issues are documented in the report. Secondly, the outcomes panel is encouraged to report on negative as well as positive trends in the outcomes. So the “negatives” are not avoided in this process. However, the choice of topic for an outcomes report is often purposive rather than random.
  • COR can be done on a shoestring budget: it is possible to reduce the cost by making the following changes to the process: combine the inception and planning meetings; conduct fewer interviews; limit the time spent on the data trawl; combine the outcomes panel with the summit; use only secondary data; or have the commissioning organisation take responsibility for producing the final report. However, the one element that is absolutely pivotal is the summit workshop. The summit is the space for tying the evaluation together and weaving the perspectives together. If managed effectively, the summit ground-truths the findings and the group develops recommendations. This is a core element of what sets COR apart from other approaches to outcome and impact evaluation.
  • If the program already has a strong monitoring, evaluation, reporting and improvement framework and process, you may not need all the elements of COR. If there is a clear program logic in place and a comprehensive monitoring system, you may only need to add some components of the COR process, such as the outcomes panel and summit workshop. A COR process in its entirety is well suited to situations where monitoring data has been collected ad hoc or no initial program logic has been developed and tested.

FAQ about COR

  1. Where did COR come from?

COR was inspired by John Mayne’s work on contribution analysis. Mayne articulated the need to report against program logic and called this a performance story report (Dart and Mayne, 2005). COR took this concept and extended it into a stepped-out process and a specific reporting product; part of this extension was intentionally participatory and drew on MLLE.

  2. What is a results chart?

A results chart tells the story of change from a program through the lines of evidence generated by the data trawl and social inquiry. It is a table that plots activities, outputs and outcomes against the evidence collected.
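
To make this concrete, here is a minimal sketch in Python of how evidence gathered through the data trawl and social inquiry might be grouped against the levels of a program logic to form a simple results chart. The outcome levels, sources and findings shown are purely illustrative assumptions, not drawn from any actual COR evaluation.

```python
from collections import defaultdict

# Hypothetical evidence items from a data trawl and social inquiry.
# Each item is tagged with the program-logic level it speaks to, its source,
# and whether it is quantitative or qualitative. All values are illustrative.
evidence = [
    {"level": "Activities", "source": "Project records (data trawl)",
     "type": "quantitative", "finding": "42 training workshops delivered"},
    {"level": "Outputs", "source": "Monitoring database (data trawl)",
     "type": "quantitative", "finding": "650 landholders completed training"},
    {"level": "Intermediate outcomes", "source": "Interviews (social inquiry)",
     "type": "qualitative", "finding": "Participants report changed grazing practices"},
    {"level": "Intermediate outcomes", "source": "MSC stories",
     "type": "qualitative", "finding": "Significant change story about fencing riparian zones"},
    {"level": "Longer-term outcomes", "source": "Regional water-quality monitoring",
     "type": "quantitative", "finding": "Sediment loads trending down in two sub-catchments"},
]

# The results chart groups evidence under each level of the program logic
# (an outcomes hierarchy), which is what the tabular chart does on paper.
program_logic = ["Activities", "Outputs", "Intermediate outcomes", "Longer-term outcomes"]
chart = defaultdict(list)
for item in evidence:
    chart[item["level"]].append(item)

# Print a plain-text version of the results chart.
for level in program_logic:
    print(f"\n{level}")
    for item in chart[level]:
        print(f"  - [{item['type']}] {item['finding']} ({item['source']})")
```

In a real COR evaluation, the chart would typically also record the strength of each line of evidence and any data gaps, so that the outcomes panel can weigh how credibly each outcome can be attributed to the program.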
