Guest blog: Why rubrics are useful in evaluations

13th March 2013 by Judy Oakden
Tags: rubrics, new zealand, example

Judy Oakden is an independent evaluator from Aotearoa New Zealand who runs her own consultancy and is a member of the Kinnect Group. She was one of ten participants in the BetterEvaluation writeshop initiative, led by Irene Guijt, which supported evaluation practitioners in writing up their valuable experiences. Judy's paper is the first in the series to be published.

In Aotearoa New Zealand the use of rubrics has been adopted across a number of institutions to help ensure assessment is clear and transparent and respects and includes diverse lines of evidence in evaluation. This case, written as part of the BetterEvaluation writeshop process, discusses how the use of rubrics was helpful throughout all stages of an evaluation of the First-time principals' Induction Programme.

[Editor's note: see also Patricia Rogers' recent blog post for an introduction to rubrics]

Why we used rubrics in the evaluation

The Ministry of Education required this evaluation on a short time-frame, with a tight budget. This case describes how the use of rubrics supported us in undertaking the evaluation in that context. In particular, we chose to use rubrics for this project because we believed that the process of developing them would help us to reach a shared understanding with key stakeholders, at the start of the evaluation, of which aspects of performance mattered to them and what the levels of performance (for instance, poor, good or excellent performance) might look like. We also expected the use of rubrics to help us identify and collect credible evidence to answer the important evaluation questions, and to provide a framework for synthesising data and reporting results efficiently and effectively in a way that is useful to the client.
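To make the idea of levels of performance concrete, a generic rubric for a single aspect of performance might look something like the sketch below. This is a hypothetical illustration only, not the rubric used in this evaluation; the actual criteria and levels are set out in the paper.

Excellent: clear and consistent evidence of strong performance on this aspect across all or almost all sources of evidence, with no significant weaknesses.
Good: evidence of sound performance overall, with some areas for improvement but no major gaps.
Poor: little or no evidence of performance on this aspect, or weaknesses that outweigh any strengths.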

The paper uses the BetterEvaluation Rainbow Framework to describe how we developed the rubrics for this evaluation and how they were used with other evaluation methods to make relevant and meaningful assessments.

Lessons learned about rubrics

Since undertaking this evaluation back in 2008, I've completed a number of other evaluations using this or a similar approach, and new 'lessons learned' in the use of rubrics have emerged. These are:

Rubrics can help frame the evaluation: At the start of an evaluation the development of evaluative criteria and rubrics can help frame the evaluation and set the boundaries, particularly in complex evaluations.

Rubrics are not set in stone: It's important to set client expectations that the rubrics may not be 'finalised' until near the end of the evaluation. At times rubrics need to be amended as we learn more during an evaluation. Sometimes additional evaluation criteria need to be added as we learn more about the programme or service.

Rubrics can aid in the development of shared understanding amongst stakeholders: Where stakeholders are involved in the development of rubrics, they appear to have a greater understanding of what the evaluation will cover and what will constitute 'good' or 'poor' levels of performance, i.e. the basis on which judgements of performance or effectiveness will be made.

Rubrics can aid in the development of efficient and effective data collection strategies: For the evaluator, once developed, rubrics can enable a more integrated approach to data collection to answer the evaluation questions. It becomes clear where existing data can be used and where new data collection and/or interviews are needed.

Rubrics can aid data synthesis: Data synthesis can be efficient when mapped against the evaluative criteria. The rubrics can be an effective tool to help layer and interpret the evidence. Clients can have the opportunity to be involved in the judgement-making stage, and hence gain a better understanding of how the process is undertaken - aiding transparency.

Rubrics can provide a useful reporting framework that answers the important questions: Reporting can be developed specifically to answer the evaluation questions. Clients have told me that a report framed by the evaluative criteria is very focused and actionable. Clients have also told me they like the way transparent judgements are made: they are not left trying to figure out for themselves whether the result is 'good enough' or not.

Challenges with using rubrics

While rubrics have mostly been helpful, there are times when their use can be challenging:

Not all stakeholder groups can work effectively with rubrics: Sometimes it is hard to get agreement amongst stakeholders on the key aspects of performance, or on what constitutes 'good' performance.

Rubrics support a participatory process and not all stakeholders want to engage in this manner: Not all stakeholders have the time or inclination to work with evaluators in a participatory manner to develop the rubrics for their evaluation. It is still possible to develop rubrics from the literature and from other sources, but these still need to be signed off by the client.

At times it can be challenging to prioritise the sources of data that are considered the most credible for the evaluation: Sometimes there is considerable data which can be used, and it is not always easy to prioritise or determine the most credible sources. With large amounts of data, synthesis can be complex and time consuming.

Share your experiences, comments and questions

So those are my thoughts. For those of you who have also used evaluation rubrics, I'd be keen to hear what you have learned:

When do you find evaluation rubrics work well?

What are some of the tips and traps you have discovered in the use of evaluation rubrics?

Are there times when you might not use evaluation rubrics?


A special thanks to this page's contributors
Author: Judy Oakden, Director, Judy Oakden Consultancy - a member of the Kinnect Group, New Zealand.

Comments

Patricia Rogers

Thanks for sharing your experiences in using rubrics.

One of the questions that has often come up in discussions about rubrics is this - How do you make sure the rubric addresses the right issues and has the right standards - and isn't just someone's arbitrary and subjective ideas about what would constitute success?

Judy Oakden

What a great question! If you refer to the paper I have written, you will see there were a number of ways we ensured the rubrics addressed the right issues and had the right standards. Firstly, we developed an intervention logic to identify the most appropriate part of the programme to evaluate (based on the timing and budget limitations, and taking into account the context of a constantly evolving programme).

We also talked with the key stakeholders in the Ministry, key training providers and some end users (principals), and drew on the literature, to determine the aspects of performance to be evaluated. At that stage, it was important to talk to the right people - those with a depth of knowledge of the programme and those with a strategic view of the programme and its ongoing development. Indeed, getting the right people in the room at the start can be one of the challenges, particularly where stakeholders are very busy.

One of the benefits of this early discussion is that it enables key stakeholders to surface what is valuable to them in relation to the programme. Different people may value different things, and it is useful to uncover this before data collection commences. Sometimes it requires careful navigation or negotiation to arrive at the list of aspects of performance that are evaluated.

To arrive at the appropriate levels of performance we might ask a range of stakeholders – if this programme was really successful, what would we see, hear or feel? And if this programme was performing poorly, what would we see, hear or feel? If needed, we can also draw on the literature, or on an expert or stakeholder panel to help inform the appropriate levels of performance. At the same time it is also important to identify what credible evidence looks like to the different stakeholders and ensure that data collected is as robust as possible.

Sheila B Robinson

I especially like that you write about the educative nature of rubric development and how this can contribute to shared understanding of what is being measured. I also appreciate that you mention that rubrics are not set in stone and can be amended as needed. This is something I think is not often mentioned in the literature on rubrics. Some feel that once a rubric is developed, it must be used as is. I also agree with your list of challenges, and I think it's important to understand that effective rubrics are not necessarily easy to create. They take time and consideration and, I think, are best when collaboratively developed. Thanks for a great post on this topic!

Judy Oakden

Thanks for your comments Sheila.  It is useful to know that you are experiencing similar benefits and challenges when using rubrics.  I agree that 'effective rubrics are not necessarily easy to create' - and yet when they are done well, don't they look beguilingly simple!  

 

Patricia Rogers

Judy, I have one more question about rubrics, which often comes up when I suggest them to people.   How do you make sure people can't just "game" the system - changing their behaviour to score well on the rubric without actually improving their performance?

Judy Oakden

 

Because the aspects of performance to be evaluated and the levels of performance are decided with a range of stakeholders (possibly informed by the literature or by expert panels), it is quite difficult to 'game' the system.
 
Furthermore, the evaluation does not rely on just one source of evidence about an aspect of performance, and this triangulation across multiple sources ensures multiple perspectives are considered in the evaluation. Often past milestone reports, interviews with key opinion formers and interviews with service users are all sources of evidence. Thus the data spans a range of timeframes and includes a range of perspectives. This makes it harder for a particular group to change their behaviour to score well on the rubric without actually improving their performance. In her book Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, Jane Davidson describes a synthesis methodology for combining a range of different types of data to make a holistic judgement about the level of performance.
 
In addition, as much of our evaluation is retrospective, the evaluative criteria were often unknown at the time people engaged with the system.

Mathea Roorda

Hi Judy, what a great resource your paper is - thanks for sharing. Based on your experience, what do you think are the key ingredients to developing defensible evaluative criteria?

Judy Oakden

 

Hi Mathea,
 
Glad that you found my paper useful and thanks for your question.
 
When I think of developing defensible evaluative criteria, I think of how we might develop aspects of performance that are both justifiable and can be supported by argument.
 
Taking for example the programme described in the paper, the evaluative criteria or aspects of performance were developed to:
 
  • focus on the key aspects being evaluated (on short term outcomes in this instance)
  • provide sufficient and coherent coverage of the key aspects of the programme (which was challenging given there were so many components of the programme)
  • include key values of a number of stakeholders (these values came from discussions with the Ministry, key providers, key opinion formers and an earlier cohort of principals)
  • and be written in a way that was easy to understand
 
These were some of the key ingredients for developing defensible evaluative criteria in this instance. What do others find are key ingredients for developing evaluative criteria in their evaluation practice?

Mathea Roorda

Hi again Judy. 

It's great to have the opportunity to reflect on this aspect of my practice - thanks! Your second bullet point raises a question (ok, several) I've been concerned about for a while. I wonder whether I (we?) tend to rely too much on stakeholders for sourcing values. I recently went back to review a rubric from an old evaluation, with Scriven's KEC list at hand, and came up with several other values (legal and ethical requirements, cultural standards) we could have included that were not identified by any stakeholder group. Some surfaced in the evaluation, but alas we didn't develop explicit criteria for them.

I am interested in other people's comments about this, and also in other ingredients for developing defensible criteria.

 

Judy Oakden

 

Hi Mathea,
 
You raise a very interesting question - do we rely on stakeholders too much to source values?
 
As Jean King would say, "that depends". I think it's really appropriate to draw from stakeholders, particularly in community development work. For this project we were lucky to also have the Best Evidence Synthesis on Leadership as a source.
 
Thinking about the three different types of values you mentioned from Scriven's KEC, for this evaluation we did include a cultural aspect of performance and looked at the extent to which First-time principals focused on equity for Maori. This was informed by Ka Hikitia, the Maori education strategy.
 
Further, because Maori cultural capability was included right through the evaluation, Maori evaluators had the opportunity to judge for themselves the extent to which they felt there was credible evidence that First-time principals had a focus on equity for Maori. Maori evaluators' judgements were not limited to the specifically cultural aspects of the evaluation, but also fed into reporting across all evaluative criteria.
 
We did not include legal or ethical evaluative criteria for this project, as I do not think they were warranted at the time. However, I could imagine a number of scenarios, particularly around delivery of differentiated services, where it might be important to build these values into the evaluative criteria. For me, legal or ethical performance aspects are often values we think about more from the perspective of the evaluation process, focusing on our own practice.
 
Anyway, those are my thoughts. Like you, Mathea, I'd be interested in what others think or have experienced, and what ideas others might have about ingredients for developing defensible evaluative criteria.

Kate McKegg

Great resource Judy!  And an interesting comment thread.

I would like to respond to Mathea's discussion / question about whether we rely too much on stakeholders when we build rubrics. From my perspective, if we are aiming to develop credible, justifiable, defensible rubrics, then I would have thought that it will be people (stakeholders) who decide whether we have done so. Even if we do look to other resources and places to ensure adequate coverage, ultimately our stakeholders will be the ones who decide whether the evaluative criteria and rubrics are a fair enough representation of what is valuable about a particular evaluand, won't they?

Judy Oakden

Great comment, Kate - that stakeholders' participation is vital and outweighs other possible resources as a source in the development of "credible, justifiable, defensible" evaluative criteria and rubrics.

I'd be interested in what others think.
