Week 9: Innovation in evaluation part 3: what’s the latest in advocacy evaluation?

3rd March 2014 by Julia Coffman
Tags: innovation

Julia Coffman is Director of the Center for Evaluation Innovation. In the third blog of our innovation in evaluation series, she looks at some recent innovations in a notoriously tricky area: advocacy evaluation. Last week, Thomas Winderl explored how development evaluation must evolve to meet the challenge of complexity and responsive planning. This week we’ll be reporting from the African Evaluation Association’s 7th international conference, where BetterEvaluation is supporting a strand of conference presentations and posters on methodological innovation.

The Center for Evaluation Innovation (CEI) is a nonprofit that was founded to push evaluation practice in new directions and into new arenas. We specialize in building the evaluation field in areas that are challenging to assess, like advocacy and policy change, and have been working in and writing about the advocacy evaluation field for years.

In discussing advocacy evaluation recently, my colleagues and I observed that our thinking has evolved in recent years, in large part because this once-fledgling field has grown considerably over the last decade. Evaluators are now innovating constantly and challenging our assumptions about what is and is not possible.

We think about innovation simply as the discovery of new ways of thinking about and doing things. While we could identify many innovations in the advocacy evaluation field that fit this definition, we focus here on innovation in one particular area—assessments of the impact or influence of advocacy efforts on public policy.

When the advocacy evaluation field began, many of us were perplexed about how to credibly examine advocacy’s impact. In part this was because the strongest design option for examining cause-and-effect—an experiment—is difficult to apply with advocacy. The concepts of defined and bounded interventions, random assignment, and control groups do not translate well to an advocacy context where timing is unpredictable and the environments in which advocates operate are complex and cannot easily be manipulated. Experimental designs require more control than advocacy efforts allow.

But experiments are not the only credible option for examining impact. Thinking on how to assess advocacy impact has since evolved, and new approaches are being tested. We want to highlight a few promising developments.

Survey with Placebo

Researchers at the Brookings Institution’s Brown Center on Education Policy used a new method, Survey with Placebo, to evaluate the influence of advocacy on a specific policy outcome: the passage of school choice legislation in the U.S. state of Louisiana. The method asks policy-makers and other political insiders to rate the influence of several advocacy organizations on a particular policy outcome.

The twist is that researchers include a nonexistent advocacy group (the placebo) in the mix of organizations tested. Ratings for the placebo offer a “zero point” on the influence scale against which real advocacy organizations can be compared. This allows researchers to quantify the amount of an organization’s influence and to test whether differences between the scores of advocacy organizations are statistically significant. The survey also collects data on how those organizations exert influence, that is, which channels of influence are most effective.
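
To make the logic concrete, here is a minimal sketch in Python of how such survey data might be analyzed. It is not the Brookings team’s actual analysis: the organization names, the 1–5 rating scale, the simulated responses, and the choice of a paired t-test are all assumptions made for illustration.

    # A minimal sketch of a Survey with Placebo analysis. NOT the Brookings
    # team's code: names, scale, and responses are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)
    n_respondents = 60  # hypothetical number of policy-maker respondents

    # Each respondent rates every organization's influence on the policy
    # outcome (1 = no influence, 5 = very influential).
    ratings = {
        "Advocacy Org A": rng.integers(2, 6, n_respondents),
        "Advocacy Org B": rng.integers(1, 5, n_respondents),
        "Placebo (nonexistent group)": rng.integers(1, 3, n_respondents),
    }

    placebo = ratings["Placebo (nonexistent group)"]

    # Because the same respondents rate every organization, a paired t-test
    # against the placebo's "zero point" is one plausible significance test.
    for org, scores in ratings.items():
        if org.startswith("Placebo"):
            continue
        t_stat, p_value = stats.ttest_rel(scores, placebo)
        print(f"{org}: mean {scores.mean():.2f} vs placebo {placebo.mean():.2f} "
              f"(t = {t_stat:.2f}, p = {p_value:.4f})")

In a real application, the key output is each organization’s distance from the placebo baseline, which anchors otherwise subjective influence ratings to a meaningful zero point.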

Process Tracing

Oxfam Great Britain (GB) is a global development organization that fights poverty and injustice, in part through numerous advocacy campaigns designed to influence public policy on these issues. To understand if they are having impact, Oxfam GB is experimenting with process tracing, a qualitative research methodology.

Each year the organization randomly selects about eight campaigns for impact assessment. An external evaluator conducts the assessment using Oxfam GB’s process tracing protocol to:

  1. look for evidence linking the campaign with the policy outcome;
  2. look at other "causal stories" of change to understand which theory of change is best supported by evidence;
  3. in light of the evidence about competing explanations, draw conclusions about the significance of the advocacy campaign’s contribution.

Oxfam GB is in year three of this process and is learning a great deal about the approach and its advantages and disadvantages. It is also publishing the results.
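
Oxfam GB’s protocol itself is qualitative, but the logic of step 2, weighing competing “causal stories” against the evidence, can be made concrete. The sketch below is loosely in the spirit of Bayesian process tracing; the hypotheses, clues, and probability values are invented for illustration and are not part of Oxfam GB’s published approach.

    # A toy illustration of adjudicating between competing causal stories.
    # Hypotheses, clues, and probabilities are invented, not Oxfam GB's.

    # Prior beliefs about rival explanations for the policy change.
    posterior = {
        "campaign contributed to the change": 1 / 3,
        "government acted independently": 1 / 3,
        "other actors drove the change": 1 / 3,
    }

    # For each clue (causal-process observation) found in the case, the
    # evaluator assigns P(evidence | hypothesis) for every hypothesis.
    evidence = {
        "minister cites the campaign's report": {
            "campaign contributed to the change": 0.8,
            "government acted independently": 0.2,
            "other actors drove the change": 0.3,
        },
        "policy draft predates the campaign": {
            "campaign contributed to the change": 0.1,
            "government acted independently": 0.7,
            "other actors drove the change": 0.5,
        },
    }

    # Bayes' rule: posterior is proportional to prior x likelihood,
    # renormalized over the competing hypotheses after each clue.
    for clue, likelihoods in evidence.items():
        unnormalized = {h: posterior[h] * likelihoods[h] for h in posterior}
        total = sum(unnormalized.values())
        posterior = {h: p / total for h, p in unnormalized.items()}

    for hypothesis, p in posterior.items():
        print(f"{hypothesis}: {p:.2f}")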

Multimethod Design

Mathematica Policy Research uses multiple qualitative and quantitative methods to answer particular sub-questions about advocacy impact, and then interprets the results collectively to draw a conclusion about overall impact. Researchers explore these kinds of analyses together:

  • Changes over time in advocacy organization capacity—previous research had shown that advocacy capacity improvements were associated with an increase in influence.
  • Relationships between advocacy organizations—the exchange of information and alignment among advocates are considered drivers of effective advocacy.
  • Temporal patterns between advocacy activity and adoption of policy outcomes—this provides a mixed-methods test of whether there is a plausible basis for linking advocacy activities to the policies targeted (a simple version of this check is sketched after this list).
  • Policy-maker and stakeholder views of the influence and role of advocacy efforts in achieving policy changes—offering descriptive statistics and an examination of patterns within and across the locations by respondent role and political affiliation.
  • Policy implementation trends in target locations relative to the country as a whole—using secondary data to explore the influence of advocacy campaigns.
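
As a concrete illustration of the temporal-pattern analysis mentioned above, here is a hypothetical sketch in Python. The adoption date and activity counts are invented; a real analysis would draw on documented campaign records and legislative histories.

    # Hypothetical sketch of the temporal-pattern check: does documented
    # advocacy activity cluster before, rather than after, policy adoption?
    from datetime import date

    policy_adopted = date(2013, 6, 15)  # invented adoption date

    # Month-by-month counts of documented advocacy activities (briefings,
    # meetings with officials, media work) targeting the policy.
    activity_by_month = {
        date(2013, 1, 1): 2,
        date(2013, 2, 1): 5,
        date(2013, 3, 1): 9,
        date(2013, 4, 1): 14,
        date(2013, 5, 1): 11,
        date(2013, 7, 1): 1,
    }

    before = sum(n for d, n in activity_by_month.items() if d < policy_adopted)
    after = sum(n for d, n in activity_by_month.items() if d >= policy_adopted)

    print(f"Activities before adoption: {before}; after: {after}")
    # A spike in activity ahead of adoption makes an advocacy link plausible,
    # but timing alone cannot establish causation.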

The process of innovation means that evaluators will continue to test out, challenge, and modify these approaches and others until we know more about how they work in different contexts, how much useful information they produce, and how much explanatory value they have. We look forward to seeing how our own thinking, and the field’s, continues to evolve.

Read more

Evaluating policy influence and advocacy

Check out our thematic page on this focus for evaluation, with an introduction to a range of methods that can be used.


Process tracing

This option for causal inference involves a case-based approach which focuses on the use of clues within a case (causal-process observations, or CPOs) to adjudicate between alternative possible explanations.


Mixed methods

There are different reasons for and ways of mixing methods. See more detail at the task page on combining qualitative and quantitative data.


Image: "Tatu Abdi - campaigning for women's land rights", Oxfam East Africa

52 Weeks of BetterEvaluation.

Author: Julia Coffman, Director, Center for Evaluation Innovation, United States of America.
