A short glossary of selected evaluation terms.

Evaluation is full of technical terms, many of which describe quite common-sense ideas. The terms set out here should provide a useful starter guide to unpicking what is meant.


Additionality

The change or impact measured or observed from an evaluation of an intervention which is over and above what was expected.


Attribution

A finding from an impact evaluation which shows how much the intervention itself was responsible for the outcomes and impacts being measured [see also causality and counterfactual below].

Before and after analysis

A simple (non-experimental) method which helps to estimate attribution by contrasting outcomes during an intervention (and perhaps at the end of a pilot or trial) with data before the intervention took place.
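As a minimal sketch of the contrast this entry describes (all variable names and figures are hypothetical, purely for illustration), a before-and-after estimate is simply the change in the average measured outcome between the two periods:

```python
# Illustrative before-and-after analysis (hypothetical data).
# e.g. average weekly attendance measured before and during/after a pilot.

baseline = [42.0, 40.5, 41.2, 43.1]    # outcomes before the intervention
follow_up = [48.3, 47.9, 49.0, 48.6]   # outcomes during/after the intervention

mean_before = sum(baseline) / len(baseline)
mean_after = sum(follow_up) / len(follow_up)

# The estimate attributes the whole observed change to the intervention,
# which is why this is only a simple (non-experimental) method.
estimated_effect = mean_after - mean_before
print(round(estimated_effect, 2))  # 6.75
```

Because nothing else that changed over the same period is discounted, this estimate is weaker evidence of attribution than a control- or comparison-group design.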

Blinded evaluation

A blind, or blinded, evaluation is one where information about the test is masked from the participant and others until after the evaluation outcome is known. This is an important part of a Randomised Control Trial and ensures the results cannot be biased by (inadvertently) distorting the behaviour of people participating in or otherwise involved in the trial.


Causality

A finding or observation from an evaluation of an intervention which digs deeper than the ‘overall’ (gross) impacts and measures or estimates that part of the gross impact which can be directly attributed to the intervention itself – the ‘net’ impact. In other words, a ‘causal’ analysis separates out (discounts) the contribution of the intervention itself from any other (e.g., external) contributions to the impacts achieved. NB. One reliable way of doing this is for the evaluation to develop what is called a ‘counterfactual’ analysis or case – see below.

Comparative group

A method in quasi-experimental (impact) evaluation often used instead of a Randomised Control Trial, and which contrasts the measured outcomes in an intervention area (or group of people) with a very closely matched comparison group (e.g., a like-for-like geographical area). The contrast can be used to demonstrate causality or the added-value or additionality of the intervention.

Control group (and analysis)

A method of impact evaluation used in Randomised Control Trials which assesses causality of impacts by contrasting the results for the beneficiaries or participants (of an intervention) with a closely matched, randomly selected, ‘non-intervention’ or control group.

Counterfactual analysis

An analysis as part of an impact evaluation which identifies what would have occurred if an intervention or activity had not been implemented and compares this to the measured outcomes after the intervention. Control groups (in a Randomised Control Trial) or comparison groups (in a quasi-experimental evaluation) are reliable ways of doing this.
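The counterfactual logic can be sketched with a simple difference-in-differences style calculation, using a control (or comparison) group's change to stand in for what would have happened anyway. All figures here are hypothetical, purely to illustrate the arithmetic:

```python
# Illustrative counterfactual calculation using a control group
# (hypothetical mean outcome values).

intervention_before, intervention_after = 50.0, 60.0   # intervention group
control_before, control_after = 50.0, 54.0             # matched control group

# The control group's change estimates what would have occurred anyway.
counterfactual_change = control_after - control_before        # 4.0
observed_change = intervention_after - intervention_before    # 10.0

# Net (attributable) impact: observed change minus the counterfactual change.
net_impact = observed_change - counterfactual_change
print(net_impact)  # 6.0
```

The contrast with a simple before-and-after analysis is that the 4.0 units of change which the control group also experienced are discounted rather than credited to the intervention.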


Deadweight

An identified impact or benefit (or part of it) from an intervention which the evaluation shows would have happened even if the intervention had not taken place – a ‘deadweight’ effect.

Gross impact

An overall (non-attributed) outcome or impact resulting from an (evaluated) intervention or activity (see impact below).

Hybrid evaluation

An evaluation methodology using mixed methods – and typically combining quantitative and qualitative methods to contrast and triangulate (see below) different evidence sources.


Impact

An observed effect resulting from an (evaluated) intervention and arising as a consequence of delivering or achieving specific activities or ‘outputs’. This is usually associated with measuring medium- or longer-term changes (e.g., sustained behaviour changes) which may take some time to be realised; outcomes (see below) refer to shorter-term changes.

Knock-on impact

An unexpected, unintended or indirect consequential effect of an (evaluated) intervention (see impact above). 


Leakage

Effects within measured impacts which support others outside the targeted or expected intervention group.


Monetisation

The process by which an outcome or impact is converted or translated into a quantified cash or financial value.

Net impact

An outcome or impact attributed to a specific intervention or activity which discounts changes which would otherwise have occurred without the (evaluated) intervention or activity having taken place.

Opportunity cost

A benefit, profit, or value of something that must be given up to acquire or achieve something else. NB Economists use this to assess the real return on an investment or intervention and refer to it as the next best alternative foregone.


Outcome

A short-term effect resulting from an (evaluated) intervention, usually arising as an early consequence of delivering or achieving specific activities or ‘outputs’ (see also impact above).

Primary evidence

Quantitative and/or qualitative evidence in an evaluation which is generated directly by the evaluator (or on their behalf) from additional information collection methods.


Proportionality

The principle of evaluation design which sets out that, in addition to the need for reliable information, the choice and mixture of evidence-gathering and analytical methods used should be ‘proportionate’ to the objectives, scale and nature of the programme being evaluated.

Secondary evidence

Quantitative and/or qualitative evidence in an evaluation which is collated from existing sources of evidence within or outside an intervention including from, for example, management or monitoring information and diverse documentary sources.


Substitution

Measured outcomes or impacts (or aspects of them) on an intervention group which are realised at the expense of others outside the intervention group, often as unintended consequences of the intervention (see below).

Triangulated evidence

Triangulation is a commonly used approach in all forms of evaluation that provides for validation of both quantitative and qualitative evidence through cross-verification from two or more sources, typically derived from a combination of several research methods assessing the same phenomenon.

Unintended consequences

Unexpected impacts and effects of (evaluated) interventions and activities which need to be identified and taken into account in any assessment of net impacts.


Valuation

Techniques for measuring or estimating the monetary and/or non-monetary value (see above) of observed outcomes and impacts, contributing to an assessment of the added value or cost-effectiveness of the evaluated intervention.

Value for Money

Value for money (VfM) measures the extent to which an intervention (or set of activities) has provided the maximum benefit for funding bodies from the resources committed, the benefits secured and the outcomes and impacts arising. VfM provides a quantitative measure, typically for specific goods or services, or combinations of these.
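The entry above does not prescribe a single formula; one common quantitative VfM measure is a benefit-cost ratio, where monetised benefits (see monetisation and valuation above) are divided by the total cost of the intervention. The figures below are hypothetical, purely to illustrate the calculation:

```python
# Illustrative value-for-money calculation (hypothetical figures).
# A benefit-cost ratio above 1.0 indicates monetised benefits
# exceed the cost of the intervention.

total_cost = 200_000.0          # funding spent on the intervention
monetised_benefits = 260_000.0  # outcomes/impacts converted to a cash value

benefit_cost_ratio = monetised_benefits / total_cost
print(round(benefit_cost_ratio, 2))  # 1.3
```

In practice such ratios should use net (attributed) impacts rather than gross ones, so that deadweight and other non-attributable changes do not inflate the measure.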