Evaluation protocol principle 1: Robustness and credibility

Why do harm-minimisation initiatives need ‘robust’ evaluation?

There are two very good reasons. Firstly, it makes good business sense – how else do you demonstrate convincingly to others that something is working, is worthwhile and merits the investment put into it? Secondly, the regulator and licensing authorities have a keen interest in evidence that harm-minimisation interventions work.

What can evaluation do that common sense cannot tell us?

Common sense and opinion have their value, but they are a poor substitute for numbers, hard evidence, customer feedback and insights, and independent analysis. This is especially true when you need to show that something is delivering what you expected and is working well (or is not). Opinion without this sort of evidence will always risk lacking credibility and will be open to challenge, especially by sceptics and those holding purse-strings.

I am surrounded by data about what we are doing and what it costs; what can evaluation add to that?

Available data is a good start. In areas like machine-based play it may even provide all (or most) of what you need. But data still needs collating and making sense of, and independent evaluators will provide a more credible and trusted assessment. Available data is often not sufficient – especially for understanding outcomes and impact, not just outputs and process.

Harm-minimisation is aimed at making a difference to players (and staff). If you are only able to talk about an intervention's outputs or levels of participation, you will fall short of showing whether it works – whether it makes a difference.

When is the right time to start preparation for an evaluation?

As soon as possible, and it is always best if an evaluation is planned alongside the intervention itself.

You may even need to start evaluation before an intervention gets underway – what are called 'ex ante' evaluations can be used to help plan an intervention and/or to provide 'baseline' information which will be used later to see how much difference the intervention has made. The case study "gambling prevalence ex ante evaluation" shows how useful these can be.

What types of evaluation should I be using?

Evaluation is done to help make decisions about things like cost-effectiveness, impacts, transferability (can a pilot be rolled out?). So, the type of evaluation you need will depend on what you are evaluating and how you need to use the evidence. This comes down to four choices – process, economic, impact and plural evaluations.

  • A process evaluation – which evaluates the mechanisms through which an intervention takes place, its outputs (not outcomes) and effectiveness.
  • An economic evaluation – which evaluates the costs of inputs, outputs or outcomes or overall value of an action.
  • An impact evaluation – which evaluates intervention outcomes or longer term impacts (the consequential changes resulting from an intervention).
  • A plural evaluation – which combines two or more of these approaches.

Resource A sets out some of the things you will need to think about in making the right choice.

Is a cost-benefit review the same as an economic evaluation, and will I need special expertise to make it credible and robust?

Economic evaluations are based on principles of cost-benefit analysis, so doing them needs an understanding of applied economics – but not all economic evaluations have to be complicated. Some may be quite straightforward, looking only at costs ('cost-description' evaluations) or at cost-effectiveness, where the complexity will depend on the nature of what's being evaluated, its expected inputs and outputs and possibly also outcomes.

Cost-benefit evaluations are the least straightforward. Even for relatively simple harm-minimisation initiatives they are likely to be highly complex and will almost certainly need specialised evaluators to provide robust evidence.
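
As an illustration only, the sketch below shows the simplest kind of cost-effectiveness arithmetic for a hypothetical intervention; all of the figures and measures are invented for the example and are not drawn from the protocol.

    # Illustrative only: minimal cost-effectiveness arithmetic for a hypothetical
    # harm-minimisation intervention. All figures are invented for this example.
    total_cost = 120_000            # delivery and staff time, in GBP (assumed)
    players_reached = 8_000         # an output: players who saw the intervention
    players_setting_limits = 1_200  # an outcome: players who then set a deposit limit

    cost_per_player_reached = total_cost / players_reached               # £15.00
    cost_per_player_setting_limit = total_cost / players_setting_limits  # £100.00

    print(f"Cost per player reached: £{cost_per_player_reached:.2f}")
    print(f"Cost per player setting a limit: £{cost_per_player_setting_limit:.2f}")

Even a sketch this simple shows why outcomes (not just outputs) matter: the cost per outcome is what any cost-effectiveness comparison between options would rest on.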

How do I handle the ethical side of an evaluation?

Ethics in evaluation is complex, but for industry harm-minimisation evaluations it mainly concerns the way new evidence is collected and used. Crucially, it needs to ensure that what an evaluation does, and how it does it, does not place anyone involved at undue risk of harm. Research, especially with vulnerable people such as problem or at-risk gamblers, needs to be able to show that the arrangements for selecting participants, briefing them (and securing their informed consent to take part), and collecting, storing and reporting evidence are ethically sound.

All evaluation should give suitable consideration to research ethics and, in some circumstances, may need to go through a formal ethical clearance process.

Is it possible to do a robust impact evaluation when it is not straightforward to define impacts?

Yes. The challenge is in first defining a small number of appropriate and measurable impacts. A good impact evaluation will have a sharp focus on what is relevant and possible to measure and understand. It will often combine 'hard' impacts (e.g., lower levels of player debt) and 'soft' impacts (e.g., player awareness of risk behaviours). It may also look out for 'indirect' impacts – unexpected consequences or effects of the initiative.

OK but aren’t some impacts impractical to get to grips with, such as minimised harm?

No. Harm-minimisation and responsible gambling are certainly challenging to reduce to a handful of things to be measured, but this can be done by looking at what the initiative is about, what it is delivering and what is expected to change as a result (in both shorter-term 'outcomes' and longer-term 'impacts').

It is important to start off with a model of such expectations – sometimes called a 'theory of change' – which is a great tool not only for planning the intervention and its focus but also an aid to defining what aspects of harm-minimisation or behaviour change need to be assessed by the evaluation.
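
As an illustration only, a theory of change can be written down as a simple chain from inputs through to impacts. The sketch below does this for a hypothetical staff-interaction initiative; the activities and indicators are assumptions made up for the example, not a template.

    # Illustrative only: a minimal 'theory of change' for a hypothetical
    # staff-interaction initiative. All entries are invented for this example.
    theory_of_change = {
        "inputs":     ["staff training time", "customer interaction guidance"],
        "activities": ["staff approach players showing markers of harm"],
        "outputs":    ["number and quality of interactions logged"],
        "outcomes":   ["players set deposit limits", "players report greater awareness of risk"],
        "impacts":    ["lower levels of harmful play over the following year"],
    }

    # Each expected outcome and impact then needs at least one measurable
    # indicator and data source before the evaluation starts.
    for change in theory_of_change["outcomes"] + theory_of_change["impacts"]:
        print(f"Define an indicator and data source for: {change}")

Writing the chain down in this way forces a decision, before the intervention starts, about which outcomes and impacts the evaluation will actually measure.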

What is counterfactual analysis and how do I do it?

It is one thing to measure change occurring, for example, to players or participants in an RG intervention; quite another to assess how much of that change resulted directly from taking part (and not from other influences). Counterfactual analysis uses tried and tested methods to estimate what would have happened if the initiative had not taken place at all. Resource B sets out what some of these methods are, and where they fit best.

Evaluation results will not be robust or credible if they cannot say what contribution the initiative is likely to have made to the measured changes. Randomised controlled trials – RCTs – are often said to be the 'Gold Standard' for impact evaluation, but other counterfactual approaches include 'quasi-experimental' and 'non-experimental' methods. RCTs will rarely be relevant to most RG evaluations, but quasi-experimental and non-experimental methods can often provide all you need to show what's working and how well. Resource C provides a ready-reckoner tool for the most robust methods – RCTs and quasi-experimental designs.
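
As an illustration only, the sketch below shows the logic of one common quasi-experimental approach, a 'difference-in-differences' comparison between participating players and a similar comparison group; the figures are invented for the example, and this is only one of the counterfactual options available.

    # Illustrative only: a 'difference-in-differences' sketch, one common
    # quasi-experimental way of estimating a counterfactual. Figures are invented.

    # Average weekly deposits (GBP) before and after the initiative, for players
    # who took part and for a similar comparison group who did not (assumed data).
    participants_before, participants_after = 52.0, 41.0
    comparison_before, comparison_after = 50.0, 47.0

    change_participants = participants_after - participants_before  # -11.0
    change_comparison = comparison_after - comparison_before        # -3.0

    # The comparison group stands in for 'what would otherwise have happened';
    # the gap between the two changes is the estimated effect of taking part.
    estimated_effect = change_participants - change_comparison      # -8.0
    print(f"Estimated effect on average weekly deposits: £{estimated_effect:.2f}")

The credibility of an estimate like this rests on how similar the comparison group really is to the participants, which is where more formal matching or statistical controls come in.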

What role does qualitative evidence play?

In general, qualitative methods such as case studies, beneficiary or customer interviews, or focus groups are a valuable part of process evaluations. They are also often a key part of impact evaluations, where they can say things about how and why interventions work that quantitative methods alone may not. Only in RCTs are they difficult to combine with quantitative evidence. Resource A also gives some ideas of where they fit best.

Evaluation protocol principle 2: Proportionality

How do I manage realistic expectations of an evaluation?

A starting point for any evaluation is having a very good handle on what can realistically be expected of it, set out in aims and objectives. This comes ahead of any choices on methods and design. It needs those setting the terms of the evaluation to be clear about what is needed from it and to agree what is realistic for it to do.

Resource D sets out a handy tool – the R-O-T-U-R framework – for doing this. And remember, any expectation which is not clear or is unrealistic needs to be renegotiated or recognised as going further than the evaluation can accommodate.

How do I make an evaluation proportionate?

Most harm-minimisation evaluations do not need to be overly technical, and the task is about deciding what is possible and practical. This is about making an evaluation 'proportionate' to needs and circumstances. It involves taking account of the state of play of an intervention (whether it is a trial, a pilot or has been running for some time), its level of innovation, its complexity (it may involve several inter-related activities, not just one), and how long it will run before the evaluation concludes. Resource E provides a checklist of some of the factors to balance in making your evaluation proportionate.

What should a good evaluation cost?

There is no straightforward answer to this often-asked question, and no ready yardsticks. A good evaluation will be proportionate, led by needs and cost-effective, but its actual budget will depend on many things, including whether it is being done internally or externally.

The evaluation budget may already be fixed before a design is put in place, so the evaluator will either need to manage down expectations to work inside the budget, or convince the budget holder to spend more. It is best for the evaluation to be anticipated when an intervention is planned, with a separate cost heading allowed for it in the overall budget.

When specifying an external evaluation, it may be best to indicate a broad cost range (under £25,000; £60-90,000; etc.) rather than a specific budget, and to encourage bidders to put in any added-value options if they can justify these. This is a good way to get best value (though not usually the lowest cost).

Evaluation protocol principle 3: Independence

What is needed for an evaluation to be independent?

An independent evaluation is able to demonstrate management of conflicts of interest and impartiality in how it interprets evidence and reaches conclusions. In practical terms, independent evaluators may well be familiar with the agencies/companies or interventions that they are evaluating, as long as they have not been involved in the pre-evaluation planning or implementation and do not otherwise have a stake in the outcomes. An independent evaluation will also need to ensure that the working arrangements with the client, and the practical management and steering of the evaluation, uphold the evaluator's impartiality and independence of judgment.

When is it best to conduct an evaluation in-house, and when to commission outside evaluators?

An internal evaluation will always face challenges in demonstrating independence, because those conducting it will be seen as having an interest in the success (or failure) of the initiative. Steps can be put in place to separate the internal evaluators from the intervention delivery, but their judgements will still risk being seen as 'compromised' because they are part of the delivery agency or company.

If what is being evaluated is sensitive or controversial, or where findings will be met by stakeholders with pre-set opinions or by doubters, it is always best to conduct an evaluation externally, through a procurement and management process which can demonstrate its impartiality. Resource F sets out some of the pros and cons of internal and external evaluations.

Where do I go for a competent evaluator who can do the job on time?

Choosing a reliable evaluator is usually the most important decision for any independent evaluation. Even if you have a structured procurement process to follow, you can give this a helping hand by making sure some expert and established specialist evaluators know about the tendering process and its timing. You will need someone who can show they are not conflicted, with a track record of systematic evaluation, an appropriate mix of quantitative and qualitative methods, and evidence of past delivery and credible, comprehensible reporting.

Be wary of picking gambling-sector specialists or consultants who may have research skills but are not genuine experts in evaluation. GambleAware has its own list of evaluation specialists which might provide a starting point.

Evaluation protocol principle 4: Transparency

What is a transparent evaluation?

Transparency is important to evaluation because, like independence and impartiality, it helps to build confidence and credibility in findings and conclusions. This means evaluations should be as open as possible, through the whole process – the intervention rationale, evaluation objectives and who is funding it; who is doing the evaluation and how they were selected; the evaluation plan, methods and data; as well as sharing the results and conclusions (and their limitations).

A transparent evaluation also analyses and sets out in its reporting both its 'reliability' (the quality and strength of the methods used) and its 'validity' (the soundness of the evidence and the generalisability of the findings).

Commercial considerations might limit some aspects of transparency, but the more open and transparent an evaluation can be, the more it is likely to increase confidence and credibility. For some evaluations, transparency may also mean respect for third-party interests and clarity about any rights worthy of protection.

How does an evaluation ‘constructively’ engage to help build confidence and credibility?

For an evaluation to constructively engage, it needs to be as open as possible, and accessible at all stages to those who might want to ask why and how it is being done. It is easier for an evaluation to get on with the job and leave external dialogue until the final report is wrapped up, but doing so may miss opportunities for evidence sharing and critical review, and may increase suspicion about the evaluation or the evaluators.

Just how much constructive engagement is possible depends on the nature of what is being evaluated, stakeholder relationships and expectations, and issues of data or commercial sensitivity. But within most evaluations there are many opportunities to engage outside the evidence collection. This does not mean changing direction or methods because a stakeholder asks for it; but it may mean explaining why some things cannot be done and what's being done instead. Well managed, this takes nothing away from evaluation independence and impartiality, and adds a lot to credibility and confidence. Resource G sets out some of the many possibilities.