EVALUATION

Evaluation broadly encompasses all elements of evidence collection and analysis.

Evaluation involves systematic data collection to address questions about whether, where, why, how and for whom a given intervention is working. Evaluation projects are most often question-driven, with research designs tailored to investigate the areas of greatest interest.

A rich body of academic research and theory underpins the design of research approaches across the evaluation field. Academic debate about evaluation practice confirms that there is no one-size-fits-all way to conduct an evaluation: methods are tailored to the needs of each evaluation, taking into account budget, time and other constraints.

Policy Performance is well placed to work with you to determine an appropriate evaluation approach and to deliver rigorous, robust and reader-friendly evaluation reports that triangulate information across multiple sources.


The following approaches may be used:

  • Evaluability assessment (evaluation readiness review): Determining whether a given intervention can be evaluated and advising on appropriate methods 

  • Developmental evaluation: Working broadly with stakeholders during the rollout of a program, with the aim of promoting feedback loops and supporting adaptation of the program in real time

  • Lapsing program evaluation: A structured evaluation approach, often conducted during the latter phases of a program's funding period, with the aim of identifying successes and areas for improvement

  • Longitudinal evaluation: Evaluation conducted over more than one phase to determine changes over time

Policy Performance frequently uses the following techniques in its evaluation projects:

  • Logic modelling: Helping to understand and visually depict the theory of change (program logic) and theory of action (implementation approach)

  • Monitoring and evaluation frameworks: Developing a comprehensive monitoring and evaluation approach to be implemented across a given program, policy area or organisation, including data collection tools

  • Surveying: Design, implementation and analysis of qualitative or quantitative (e.g. Likert-scale) surveys

  • Interviewing: Conducting structured, semi-structured or unstructured interviews with stakeholders or in the field, individually or in groups

  • Focus groups: Conducting facilitated forums with a small number of key questions for a selected sample of attendees

  • Case studies: Developing place-based (e.g. a school or region) or service-based (e.g. an organisation or program area) summaries of practice as short, stand-alone write-ups

  • Literature review: Learning about the different policy levers and approaches that could be used to achieve policy ambitions, and understanding the results of evaluations of these interventions

  • Benchmarking: Comparing results across different locations, jurisdictions or countries

  • Program data review: Analysis of program data, often to triangulate findings with primary research sources or for data visualisation purposes.

Charlie Tulloch is a board member of the Australasian Evaluation Society (AES).

Contact us to discuss all your evaluation needs.