Monitoring and evaluation
This encapsulates all elements of evidence collection and analysis. Often this involves defined monitoring and evaluation activities, but it may also include outcomes reporting, continuous improvement cycles and seeking feedback from service recipients and beneficiaries.
Monitoring is the routine collection of performance data that informs program management decisions.
Evaluation involves systematic data collection to address questions about whether, where, why, how and for whom a given program, service or policy is working. Evaluation projects are most often question-driven, with research designs tailored to investigate the areas of greatest interest.
There is no one-size-fits-all approach to conducting evaluations: methods are selected to match the needs of each evaluation, taking into account budget, time and other constraints.
Policy Performance is well-placed to work with you to determine an appropriate evaluation approach and to produce rigorous, robust and reader-friendly evaluation reports that triangulate information across multiple sources.
The following approaches may be used:
- Evaluability assessment (evaluation readiness review): Determining whether a given intervention can be evaluated and advising on appropriate methods
- Developmental evaluation: Working broadly with stakeholders during the rollout of a program, with the aim of promoting feedback loops and supporting adaptation of the program in real time
- Lapsing program evaluation: A structured evaluation approach, often conducted during the latter phases of a program's funding period, with the aim of identifying successes and areas for improvement
- Longitudinal evaluation: Evaluation conducted over more than one phase to determine changes over time
Policy Performance frequently uses the following techniques during its evaluation projects:
- Logic modelling: Helping to understand and visually depict the theory of change (program logic) and theory of action (implementation approach)
- Randomised controlled trials: Understanding quantifiable impact changes over time, including their magnitude and distribution
- Quasi-experimental designs: Applying treatment and control group methods in real-world, non-randomised settings (e.g. regression discontinuity designs); see the illustrative sketch after this list
- Monitoring and evaluation frameworks: Developing a comprehensive monitoring and evaluation approach to be implemented across a given program, policy area or organisation, including data collection tools
- Surveying: Design, implementation and analysis of qualitative or quantitative (Likert) surveys
- Interviewing: Structured, semi-structured or unstructured (individual or group) stakeholder or fieldwork interviews
- Focus groups: Conducting facilitated forums with a small number of key questions for a selected sample of attendees
- Case studies: Developing place-based (e.g. a school or region) or service-based (e.g. an organisation or program area) summaries of practice as short, stand-alone write-ups
- Literature review: Learning about different policy levers and approaches that could be used to achieve desired ambitions, and understanding the results of evaluations of these interventions
- Benchmarking: Comparing results between different locations, jurisdictions or countries
- Program data review: Analysis of program data, often to triangulate findings with primary research sources or for data visualisation purposes.