Compare Results to the Counterfactual

One of the three tasks involved in understanding causes is to compare the observed results to those you would expect if the intervention had not been implemented - this is known as the 'counterfactual'.

Many discussions of impact evaluation argue that it is essential to include a counterfactual. However, some people argue that in turbulent, complex situations it can be impossible to develop an accurate estimate of what would have happened in the absence of an intervention, since that absence would itself have affected the situation in ways that cannot be predicted. In situations of rapid and unpredictable change, where a credible counterfactual cannot be constructed, it might still be possible to build a strong, empirical case that an intervention produced certain impacts, but not to be sure about what would have happened if the intervention had not been implemented.

For example, it might be possible to show that the development of community infrastructure for raising fish for consumption and sale was directly due to a local project, without being able to confidently state that this would not have happened in the absence of the project (perhaps through an alternative project being implemented by another organization). 

For a discussion of counterfactual approaches to causal inference, see the Stanford Encyclopedia of Philosophy entry.


There are three clusters of options for this task:

Experimental options (or research designs)

Develop a counterfactual using a control group. Randomly assign participants to either receive the intervention or to be in a control group. 

  • Control Group: a group created through random assignment who do not receive a program, or who receive the usual program when a new version is being evaluated. An essential element of the Randomized Controlled Trial approach to impact evaluation. 
  • Randomized controlled trial (RCT): creates a control group through random assignment and compares it to one or more treatment groups to produce an unbiased estimate of the net effect of the intervention.
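The random-assignment step described above can be sketched in a few lines of Python (a minimal illustration only; the function name and participant list are invented for the example):

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into a treatment group and a control group."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

# Hypothetical roster of 100 participants, identified by number
treatment, control = randomly_assign(range(100), seed=42)
```

Because every participant has the same chance of ending up in either group, the two groups should be similar on both observed and unobserved characteristics, which is what lets the control group serve as the counterfactual.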

Quasi-experimental options (or research designs)

Develop a counterfactual using a comparison group which has not been created by randomization.  

  • Difference-in-Difference (or Double Difference): comparing the before-and-after change for the group receiving the intervention (where they have not been randomly assigned) with the before-and-after change for a comparison group that did not receive it.
  • Instrumental variables: estimating the causal effect of an intervention by using a variable that influences participation in the intervention but does not otherwise affect the outcome. 
  • Judgemental matching: involves creating a comparison group by finding a match for each person or site in the treatment group based on researcher judgements about what variables are important.
  • Matched comparisons: matching participants (individuals, organizations or communities) with a non-participant on variables that are thought to be relevant. 
  • Propensity scores: statistically creating comparable groups based on an analysis of the factors that influenced people’s propensity to participate in the program.
  • Regression Discontinuity: where access to the intervention is determined by a cut-off point on a continuous measure, comparing the outcomes of individuals just below the cut-off with those just above it. 
  • Sequential allocation: a treatment group and a comparison group are created by sequential allocation (e.g. every 3rd person on the list).
  • Statistically created counterfactual: developing a statistical model, such as a regression analysis, to estimate what would have happened in the absence of an intervention.  
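As a simple illustration of the difference-in-difference calculation listed above (the outcome values are hypothetical; a real analysis would typically use a regression framework to obtain standard errors):

```python
# Hypothetical mean outcome values (e.g. household income) for each group
treated_before, treated_after = 100.0, 130.0
comparison_before, comparison_after = 100.0, 110.0

# Change over time within each group
treated_change = treated_after - treated_before            # 30.0
comparison_change = comparison_after - comparison_before   # 10.0

# The "double difference": the change in the treated group
# minus the change in the comparison group
did_estimate = treated_change - comparison_change          # 20.0
```

The comparison group's change (10.0) stands in for what would have happened to the treated group anyway, so the remaining 20.0 is attributed to the intervention, on the assumption that both groups would otherwise have followed the same trend.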

Non-experimental options

Develop a hypothetical prediction of what would have happened in the absence of the intervention. 

  • Key informant: asking experts in these types of programmes or in the community to predict what would have happened in the absence of the intervention.
  • Logically constructed counterfactual: using the baseline as an estimate of the counterfactual. Process tracing can support this analysis at each step of the theory of change. 
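The logically constructed counterfactual above amounts to a very simple calculation (hypothetical values; note that it assumes the outcome would have stayed at its baseline level, which is why a comparison group is preferred where one is feasible):

```python
# Hypothetical survey values for one outcome indicator
baseline = 40.0   # measured before the intervention
endline = 55.0    # measured after the intervention

# Logically constructed counterfactual: assume the outcome would have
# remained at the baseline level in the absence of the intervention
estimated_impact = endline - baseline   # 15.0
```

Any background trend (positive or negative) is folded into the estimate here, so process tracing through the theory of change is needed to argue that the change is plausibly due to the intervention.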



Sitwat Naeem

The most important part of any social intervention is the participation of the people. Participation increases when people feel that the intervention is producing results and bringing change to their lives. The change, of course, will be slow, but if it is visible, it is accepted.

Nick Herft

Hi Sitwat,

There's an upcoming Q&A on participation. You can register for it here and submit a question to be answered by Irene Guijt and Leslie Groves.

Irene and Leslie have recently released four blog posts on Participation in Evaluation, you can find them here.

Steve Powell

Great article, this site is the best on the web for M&E overview. Just a tiny comment. It sounds like you are saying a control group has to be randomly assigned: "Develop a counterfactual using a control group. Randomly assign participants to either receive the intervention or to be in a control group."

I think it is more usual to say that any comparison group (like those involved in all the other examples you give) is a control group, and when you have randomised assignment, that is called, well, a randomised control group! Hence, "randomised controlled trials" are different from "controlled trials".
