Specify the Key Evaluation Questions

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer - not the specific questions asked in an interview or a questionnaire. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyze it, and how to report it.

KEQs usually need to be developed and agreed on at the beginning of evaluation planning. Sometimes, however, KEQs are already prescribed by an evaluation system or a previously developed evaluation framework.

Try not to have too many Key Evaluation Questions - a maximum of 5-7 main questions is usually sufficient. It can also be useful to have some more specific questions under the KEQs.

Key Evaluation Questions should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used. In particular, it can be helpful to imagine scenarios in which the answers to the KEQs would be used - to check that the KEQs are likely to be relevant and useful, and that they cover the range of issues the evaluation is intended to address. (This process can also help to review the types of data that might be feasible and credible to use to answer the KEQs.)

The following information has been taken from the New South Wales Government, Department of Premier and Cabinet Evaluation Toolkit, which BetterEvaluation helped to develop.

Here are some typical key evaluation questions for the 3 main types of evaluation:

Process evaluation

  • How is the program being implemented?
  • How appropriate are the processes compared with quality standards?
  • Is the program being implemented correctly?
  • Are participants being reached as intended?
  • How satisfied are program clients? For which clients?
  • What has been done in an innovative way?

Outcome evaluation (or impact evaluation)

  • How well did the program work?
  • Did the program produce or contribute to the intended outcomes in the short, medium and long term?
  • For whom, in what ways and in what circumstances?
  • What unintended outcomes (positive and negative) were produced?
  • To what extent can changes be attributed to the program?
  • What were the particular features of the program and context that made a difference?
  • What was the influence of other factors?

Economic evaluation (cost-effectiveness analysis and cost-benefit analysis)

  • What has been the ratio of costs to benefits?
  • What is the most cost-effective option?
  • Has the intervention been cost-effective (compared to alternatives)?
  • Is the program the best use of resources?
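
As a purely illustrative example of the cost-benefit question, using hypothetical figures: a program that costs $2 million to deliver and produces $5 million in quantified benefits has a benefit-cost ratio of 5 ÷ 2 = 2.5, that is, $2.50 of benefit for every dollar spent. A ratio below 1 would indicate that the quantified costs outweigh the quantified benefits.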

 

Appropriateness, effectiveness and efficiency

Key evaluation questions are often grouped under three broad categories that assess whether the program is appropriate, effective and efficient.

Organising key evaluation questions under these categories allows an assessment of the degree to which a particular program, in particular circumstances, is appropriate, effective and efficient. Suitable questions under each category will vary with the type of evaluation (process, outcome or economic).

Appropriateness

  • To what extent does the program address an identified need?
  • How well does the program align with government and agency priorities?
  • Does the program represent a legitimate role for government?

Effectiveness

  • To what extent is the program achieving the intended outcomes, in the short, medium and long term?
  • To what extent is the program producing worthwhile results (outputs, outcomes) and/or meeting each of its objectives?

Efficiency

  • Do the outcomes of the program represent value for money?
  • To what extent is the relationship between inputs and outputs timely, cost-effective and to expected standards?

Example

The Evaluation of the Stronger Families and Communities Strategy used clear Key Evaluation Questions to ensure a coherent evaluation despite the scale and diversity of what was being evaluated – an evaluation over 3 years, covering more than 600 different projects funded through 5 different funding initiatives, and producing 7 issues papers and 11 case study reports (including studies of particular funding initiatives) as well as ongoing progress reports and a final report.  

The Key Evaluation Questions were developed through an extensive consultative process to develop the evaluation framework, which was done before advertising the contract to conduct the actual evaluation.

1. How is the Strategy contributing to family and community strength in the short-term, medium-term, and longer-term?

2. To what extent has the Strategy produced unintended outcomes (positive and negative)?

3. What were the costs and benefits of the Strategy relative to similar national and international interventions? (Given data limitations, this was revised to ask the question in ‘broad, qualitative terms’.)

4. What were the particular features of the Strategy that made a difference?

5. What is helping or hindering the initiatives to achieve their objectives? What explains why some initiatives work? In particular, does the interaction between different initiatives contribute to achieving better outcomes?

6. How does the Strategy contribute to the achievement of outcomes in conjunction with other initiatives, programs or services in the area?

7. What else is helping or hindering the Strategy to achieve its objectives and outcomes? What works best for whom, why and when?

8. How can the Strategy achieve better outcomes?

CIRCLE (2008) Stronger Families and Communities Strategy 2000-2004: Final Report. Melbourne: RMIT University. 

The KEQs were used to structure progress reports and the final report, providing a clear framework for bringing together diverse evidence and an emerging narrative about the findings.

The Managers' Guide

Coming at this from a manager or commissioner's perspective? Step 2: Scope the evaluation in our Managers' Guide has some specific information geared towards making decisions about what the evaluation needs to do, including how to develop agreed key evaluation questions.
 

Resources


KEQ Checklists

  • CDC: Checklist to help focus your evaluation: This checklist, created by the Centers for Disease Control and Prevention (CDC), helps you to assess potential evaluation questions in terms of their relevance, feasibility, fit with the values, nature and theory of change of the program, and the level of stakeholder engagement.
  • Evaluation Checklist for Program Evaluation: This checklist by Lori Wingate and Daniela Schroeter distills and explains criteria for effective evaluation questions. It can be used to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. It identifies characteristics of good evaluation questions, based on the relevant literature and the authors' own experience with evaluation design, implementation, and use.


Cite this page

BetterEvaluation. (2016). Specify the Key Evaluation Questions (KEQs). Retrieved from: http://betterevaluation.org/en/plan/engage_frame/decide_evaluation_quest...

Comments

Stephen Berson

This page does an excellent job of providing a large number of practical questions that can be used across different types of evaluations. However, it doesn't distinguish between research questions and evaluation questions. This differentiation is crucial to ensure that what we are calling an evaluation is not merely a piece of applied research. For something to count as an evaluation, it must investigate questions related to merit, worth and significance (Scriven, 2015) or, in simpler terms, the "goodness of a program" (Gullickson, 2018), rather than simply describe what is happening in a program (or other form of evaluand). This issue is explored in depth in Nunns, Peace, and Witten's 2015 article Evaluative reasoning in public-sector evaluation in Aotearoa New Zealand.

As Gullickson (2018) describes "often what is called an evaluation question is simply a research question, which just requires plain descriptive or causal answers...In the hierarchy of evaluation, research questions serve and provide information to answer evaluation questions." Research questions, which are merely descriptive, should be embedded inside evaluation questions as follows:

EQ1 -> RQ1
    -> RQ2

EQ2 -> EQ2a -> RQ3
    -> EQ2b -> RQ4
            -> RQ5

As Gullickson further describes, "evaluation questions get at the heart of what makes the evaluand good, valuable or worthwhile. They go directly to questions of merit, worth, and/or significance."

Common characteristics shared by research and evaluation questions:

  • Focus the question asker on a specific area of interest
  • Are relevant to the object in question
  • Facilitate the design of data collection
  • Are answerable (i.e. measurable, assessable, or observable)
  • Are worthwhile (i.e. someone cares about the answer)
  • Have more than one answer (i.e. you haven’t pre-determined the answer with the question you’ve asked).

Characteristics that make a question an evaluation question:

  • Identifies distinct dimensions of performance related to the value of the object
  • Addresses issues of quality, cost, significance, satisfaction, success, effectiveness, efficiency, sustainability, exportability, areas for improvement, outcomes and impact (including unintended ones), etc.

"When you are preparing for an evaluation (and preparing your clients), checking your questions against these last two characteristics can be a good litmus test of whether they want an evaluation—which comes with a judgement—or just research to tell them what’s going on with their programs. Either can serve their needs, but one isn’t evaluation (by our definition)—and you need to ascertain if you and the client are on the same page about what they need" (Gullickson, 2018).

Some examples of how research questions can be transformed into evaluation questions are provided below (Gullickson, 2018):

Research questions

  1. "What needs does this evaluand address?
  2. How was the program implemented?
  3. What are the outcomes? Do they demonstrate statistically significant changes?
  4. What are the components of this evaluand and how do they relate to the expected outcomes?" (Gullickson, 2018)

Research questions transformed into evaluation questions

  1. "How large or important are the needs that the evaluand addresses?
  2. How well was the program implemented (fidelity to model, quality of delivery, cost compared to similar implementers)?
  3. Which of the evaluand's outcomes are most important? According to whose values? On what evidence should that be determined?
  4. What components of this evaluand contributed the most value to the most important outcomes?" (Gullickson, 2018)

 

References

Gullickson, A. (2018). Practice of Evaluation [course materials]. Melbourne, Victoria: University of Melbourne, EDUC90847.

Nunns, Peace, and Witten (2015). Evaluative reasoning in public-sector evaluation in Aotearoa New Zealand: How are we doing? Evaluation Matters—He Take Tō Te Aromatawai, 1. New Zealand Council for Educational Research. http://www.nzcer.org.nz/system/files/journals/evaluationmaters/downloads...

Scriven, M. (2015). Key Evaluation Checklist. Retrieved from https://wmich.edu/evaluation/checklists
