Week 48: The value iceberg


Efforts to measure, quantify and compare the 'value' of different interventions have become popular as a way for social change organisations to decide how to use their time and money.

However, such approaches – unless very carefully designed – risk disadvantaging advocacy relative to more direct interventions, such as providing services.

This is problematic because advocacy, by addressing barriers to change, has the potential to achieve or unlock disproportionate results. It is fundamentally about power: advocacy tackles the difficult issues that resist easy or settled resolution. It is inherently more speculative than direct interventions, and its benefits are less easily articulated.

In value terms, advocacy is an iceberg. The most significant benefits are often submerged: difficult to measure, to monetise and sometimes even to see.

It’s right to anchor advocacy to rigorous assessment. But calculations of value can risk focusing only on the part that is visible, generating misleading information and encouraging poor decision-making.

How, then, to weigh the benefits of advocacy?

1. Develop a robust strategic worldview

How you answer questions about value and impact depends on how you see the world, how you believe change happens and the role you believe your organisation best plays in it. This understanding should inform how much, and in what ways, you invest in advocacy.

2. Recognise advocacy as inherently speculative

Not all advocacy efforts will pay off, so plan for, and take a long-term view of, the "aggregate return" on advocacy rather than focusing on individual successes (or failures).

3. Look to create the conditions for effective advocacy

Campaigners should do more to critically assess the elements that make advocacy more likely to be effective – even if results can't be guaranteed.

4. Do the simple things right

Build in reflective processes, such as action reviews, that expand analytic capacity and feed learning into future work.

5. Make sure comparisons are meaningful

'Value' as a comparative lens may not offer a level playing field.

Advocacy may be inherently disadvantaged by a straight comparison with more direct interventions. Comparisons between campaigns could make more straightforward, shorter-term, less ambitious campaigns appear more attractive, whilst comparing the effectiveness of tactics within a campaign misses the basic truth that the individual pieces don't simply add up – they interact.

6. Distil meaningfully

Boiling information down to the basics aids communication, and all the better if it can be quantified. But not if the numbers, instead of being an aid to strategic decision-making, end up being a substitute for it.

Quantifying qualitative information can strip information of the very detail that gives it value. It can also convey a false sense of precision and objectivity. For these reasons, we should avoid narrative-free data.

7. Institutionalise double-loop learning

Advocacy is highly adaptive. It involves using navigational tools, rather than hard and fast formulas, and relying on rapid processing of intelligence and experimentation. This way of working can feed into other programmes in ways that bolster the capacity to operate in a changing world.

8. Seek ‘crowd-sourced’ wisdom rather than objective truth

There are good ways to be evidence-based in trying to determine outcomes, their significance and the factors that helped bring them about. But advocacy operates in contested spaces, so proof is generally elusive. Instead, build a subjective evidence base and make it as robust as possible by:

  • Asking the right questions
  • Asking the right people
  • Filtering responses based on a wider understanding of the motivations at play

Looking more widely, investing in sector-wide research can provide information that many individual organisations can’t afford to commission.

9. Embed multiple accountabilities

Groups to whom practitioners are ‘upwardly accountable’ – funders, senior managers and board members – are the constituencies most likely to be interested in ‘results’. Establishing multi-directional accountability, with wider audiences involved in determining what's important and what's meaningful, means that the interest in ‘results’ can be better contextualised.

10. Add to the innovation ‘to do’ list

Evaluators, campaigners and funders can play a part in developing strategies and tools. For example:

  • Supporting cross-sectoral collaboration on ways to assess the sub-aquatic elements of the campaigning iceberg, and
  • Collating an evidence base of campaigning successes, as a sectoral resource.

Advocacy and campaigning outcomes are often hidden below the surface. Given that results are more speculative and less tangible, but ultimately more transformational, it's vital that we think about how to apply concepts of value in advocacy with our eyes wide open.

Do you work in an advocacy or campaigning organisation or do you evaluate advocacy work? If so, we would love to hear from you about how these ideas might apply in your work:

  • Do you recognise the analysis presented?
  • Does this highlight issues that you are grappling with?
  • Does it point to viable solutions?
  • What are the ways you approach assessing the value of advocacy?

Rhonda Schlangen and Jim Coe are independent consultants who work with social change organisations and funders to develop and evaluate advocacy and campaigns. In ‘The Value Iceberg’, a Discussion Paper published by BetterEvaluation, they look at how concepts of 'value' and 'results' are being applied to advocacy and campaigning and present some alternative strategies for assessing advocacy. This blog aims to initiate a discussion of these ideas and how advocacy and campaigning practitioners and evaluators can use them.