Other relevant organisational policies

Organisational enabling policies are guidelines or rules that create the conditions for specific practices within an organisation to function effectively. While not directly focused on monitoring and evaluation (M&E), these policies indirectly shape how M&E activities are carried out.

For instance, a performance management policy might set the framework for measuring employee performance, which in turn could influence the metrics used in M&E. Results-based budgeting policies could dictate how resources are allocated, affecting the scope and scale of M&E activities. Procurement policies might influence the tools and services used for M&E, while recruitment policies could affect the level of expertise available in the organisation for conducting M&E.

Examples

Program Cycle Operational Policy, USAID 2022

USAID's Program Cycle Operational Policy (ADS Chapter 201) sets out required processes for designing and implementing programs, including how monitoring, evaluation, and collaborating, learning and adapting (CLA) fit into the program cycle.

The policy includes definitions of monitoring, evaluation and CLA, principles that should be used to guide them, types of monitoring, types of evaluation, and types of indicators, as well as strategies to improve and ensure quality.

It also sets out specific requirements for monitoring and evaluation – for example, how indicators must be documented and reviewed:

“Operating Units (OUs) must collect and maintain indicator reference information and document that information in a Performance Indicator Reference Sheet (PIRS) to promote the quality and consistency of data across the Agency. A Performance Indicator Reference Sheet documents the definition, purpose, and methodology of the indicator to ensure that all parties that are collecting and using the indicator have the same understanding of its content.” (p. 87)

“To ensure high-quality performance monitoring data, OUs must conduct a DQA for each performance indicator they report to external entities, including but not limited to indicators reported in the PPR. OUs may not externally report any USAID data that has not had a DQA.” (p. 91)
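The indicator reference information described above could be held in a simple structured record. The class name and field set in this sketch are illustrative assumptions drawn from the elements the policy names (definition, purpose, methodology), not USAID's official PIRS template:

```python
from dataclasses import dataclass

@dataclass
class IndicatorReferenceSheet:
    # Hypothetical fields based on the PIRS elements named in the quote above;
    # a real PIRS contains additional fields defined by USAID's template.
    name: str
    definition: str   # precise meaning of the indicator
    purpose: str      # why the indicator is collected
    methodology: str  # how the data are gathered and verified

# Example record for a hypothetical training indicator
pirs = IndicatorReferenceSheet(
    name="Number of people trained",
    definition="Count of individuals completing the full course",
    purpose="Tracks the reach of the training component",
    methodology="Attendance registers, verified quarterly",
)
```

Keeping one shared record per indicator is what gives "all parties ... the same understanding of its content".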

The policy also provides direction on evaluation design for impact evaluations:

“USAID evaluations should use the highest level of rigor appropriate to the evaluation question. When USAID needs information on whether a specific outcome is attributed to and achieved by a specific intervention, the Agency prefers the use of impact evaluations.” (pp. 17-18)

“Impact evaluations must use at least one of the following evaluation methods and approaches to credibly define a counterfactual:

  • Experimental Design: Random assignment of an intervention among members of the eligible population is used to eliminate selection bias, so there are those who receive the intervention(s) (treatment group) and those who do not (control group). This type of design is also called a Randomized Controlled Trial (RCT).
  • Quasi-Experimental Design: In the absence of an experimental design, a comparison group may be generated through rigorous statistical procedures, such as propensity-score matching, regression discontinuity, or analysis with instrumental variables. Difference-in-differences designs are only appropriate if it can be demonstrated that, in the absence of treatment, the differences between a treatment and a non-randomly chosen comparison group would be constant over time.” (pp. 95-96)
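The difference-in-differences logic mentioned above can be shown with a minimal sketch. The function and the outcome values are illustrative assumptions, not USAID data; the point is only the arithmetic of the counterfactual: the change in the treatment group minus the change in the comparison group.

```python
def diff_in_diff(treat_pre: float, treat_post: float,
                 comp_pre: float, comp_post: float) -> float:
    """Difference-in-differences estimate: the treatment group's change
    over time, net of the change in the comparison group (which stands
    in for the counterfactual trend)."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

# Made-up mean outcomes for illustration only:
# treatment group moves 40 -> 55; comparison group moves 42 -> 47.
effect = diff_in_diff(treat_pre=40.0, treat_post=55.0,
                      comp_pre=42.0, comp_post=47.0)
# (55 - 40) - (47 - 42) = 10.0
```

The estimate is credible only under the "parallel trends" condition the policy describes: absent treatment, the gap between the two groups would have stayed constant over time.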

United States Agency for International Development. (2022, September 28). ADS Chapter 201 - Program Cycle Operational Policy. Retrieved from https://www.usaid.gov/sites/default/files/2022-12/201.pdf

