
What does evaluation mean to you?


There is no single, agreed definition of evaluation - it is a broad field.
 

When leading introduction-to-evaluation training sessions, I ask participants to write down and then share their views on what evaluation means to them. Responses vary widely:

          “It’s about testing whether objectives have been met”

          “It’s about collecting good evidence”

          “It’s about finding out what changed over time”

          “It’s about finding ways to improve things”

 

All these can be right, depending on the ambitions of the evaluation. In my mind, evaluation can be broken down into just a few concepts.

 

1 Scoping the evaluand and key questions

The first step is setting boundaries and clearly defining what will be evaluated. Theoretically, we can evaluate anything, but practically, we can’t evaluate everything! It is hugely important to conceptualise the work of the evaluation clearly from the outset. In evaluation, failing to plan is most definitely planning to fail.

The word evaluand is used by evaluators to refer to “the thing that is being evaluated”. The evaluand can be broad or narrow. In public sector settings, most often the evaluand is a system, a policy, a program, a process, a project, a collection of grants or even an organisation/team. Defining the evaluand is the critical first step in scoping any piece of work.

Like an abstract for academic research, an evaluation also needs to be bounded by its key areas of inquiry. While academic research is often based on testing a hypothesis, evaluation is more of a ground-up (inductive) process, seeking to investigate the areas of most interest, particularly to those who will use the evaluation findings.

 

This is most often achieved by settling on a set of key evaluation questions, with linked sub-questions that help investigate different dimensions of interest (often appropriateness, efficiency and effectiveness). When thinking about key questions, less is more – fewer questions promote greater precision and focus – so there’s no need to over-engineer these. In any case, key themes usually emerge through data collection processes... read on...

 

2 Data

Data is the information and evidence that will help us to explore and answer the various evaluation questions that have been agreed.

 

The starting point here is to consider existing datasets. Most evaluands have, at minimum, program information contained in progress reports, project updates or similar, which provides a good starting point for understanding what is being delivered (outputs), how, and to whom.

If there are gaps in the data, or areas that need further exploration, then existing data needs to be augmented with targeted primary data collection. There are various ways to do this – usually surveys/questionnaires, focus groups, interviews, observations, tests, etc. Data collection techniques need to consider the appropriate way to engage different stakeholders.

 

In essence, make sure that all data collection is systematic (well planned and rigorously conducted) so that results are reliable (meaning that if the study were repeated, similar results would be achieved) and valid (meaning that conclusions are based on defensible evidence), producing findings that stand up to scrutiny. Poor research validity can lead to misguided or inaccurate findings.

 

3 Analysis

It is important to spend time analysing the data. If possible, seek to ‘triangulate’ the evidence – that is, check for similarities and differences across different data sources to generate a more rounded view of the findings. You might also triangulate within sources, for example by looking at how survey responses differ by cohort (e.g. age, gender, location). Allow time for good analysis, as there are often secrets hidden in the data that will be missed if this stage does not get adequate focus.
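For evaluators who work with survey data in code, here is a minimal sketch of that kind of within-source triangulation, written in Python with the pandas library. The dataset, column names and 1–5 rating scale are hypothetical, purely for illustration:

    import pandas as pd

    # Hypothetical survey responses - the cohort columns and satisfaction
    # ratings are illustrative assumptions, not data from a real evaluation.
    responses = pd.DataFrame({
        "age_group":    ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
        "location":     ["metro", "regional", "metro", "regional", "metro", "regional"],
        "satisfaction": [4, 3, 5, 4, 2, 3],
    })

    # Compare average satisfaction (and response counts) across cohorts to
    # see where views converge or diverge - one simple form of triangulation.
    by_age = responses.groupby("age_group")["satisfaction"].agg(["mean", "count"])
    by_location = responses.groupby("location")["satisfaction"].agg(["mean", "count"])

    print(by_age)
    print(by_location)

The same grouping idea extends to any cohort variable captured in the survey, and the resulting breakdowns can then be compared against what interviews or focus groups suggested.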

4 Judgement

This ties the evaluation together and is arguably the whole reason for doing much of the work – it involves the formation and sharing of judgements to respond to the evaluation questions. This is a process of being transparent about the evidence and arriving at findings regarding the impact (effect) of the evaluand on the target (and other) population(s).

The approach to forming sound judgements should be considered in the early phases of the evaluation and then applied to the evidence during the analysis phase. This can be achieved through being transparent about available datasets, performance indicators, baselines and targets. Where these remain grey as an evaluation progresses, there may not be a sound basis for judgements to be made.

 

If so, don't despair. There are other ways to form judgements, many of them participatory, such as comparison with other locations/jurisdictions (benchmarking assessments), sharing case study stories of successes and failures, drawing on the views of experts (e.g. challenge panels) or co-developing judgements with participants, based on the data and analysis phases.

By this stage, the evaluation is essentially there. The remainder of the task is really about storytelling... what was the method, what was heard, what were the findings and what are the implications.

 

There you have it – evaluation in a nutshell. Scoping, data, analysis, judgement.

 
