This piece was written by Edward Orlik, a policy advisor for the What Works Network. It is part of a new Apolitical series of the best government blogs from around the world.
Last month marked exactly five years since the What Works Network was established to feed better evidence into the way we make decisions across the public sector.
We’ve come a long way since 2013. The network now comprises 10 independent research centres in areas such as health, education and children’s social care.
But these are no ordinary research centres. They tell us what works in their respective policy areas by summarising the existing evidence base. Where the available evidence is weak, a number of centres have commissioned trials to provide commissioners, policymakers and frontline workers with the answers they need. And these centres have developed innovative ways of helping decision makers act on this evidence.
Alongside the network, we’ve also established the Cross-Government Trial Advice Panel, which helps civil servants to design, deliver and analyse high-quality impact evaluations. With almost 50 projects under our belt, we’ve learnt a thing or two about designing robust policy experiments.
Here are five lessons we’ve learnt along the way.
1. Randomised controlled trials aren’t the only way of assessing the impact of a policy or program.
While randomised controlled trials are still considered the gold standard for measuring the impact of a policy, in situations where they are impractical or unethical the Trial Advice Panel has sometimes recommended quasi-experimental designs such as propensity score matching or regression discontinuity analysis.
Quasi-experiments construct a statistical comparison group rather than randomly assigning participants to treatment. They’re particularly useful in ethically sensitive situations where you have good data about participants — for instance, when sentencing offenders.
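To illustrate the idea, propensity score matching pairs each treated unit with an untreated unit that had a similar estimated probability of receiving the intervention, and compares their outcomes. A minimal sketch in Python, using made-up scores and outcomes (all data, names and numbers here are hypothetical, and a real analysis would first estimate the scores, e.g. with a logistic regression):

```python
# Each unit is a (propensity score, observed outcome) pair.
# The propensity score is the estimated probability of receiving
# the intervention, pre-computed from participants' characteristics.
treated = [
    (0.72, 14.0), (0.55, 11.5), (0.81, 15.2),
]
control = [
    (0.70, 12.1), (0.50, 10.9), (0.85, 13.8), (0.30, 9.0),
]

def match_effect(treated, control):
    """Match each treated unit to the control unit with the closest
    propensity score, then average the treated-minus-control outcome
    gaps to estimate the effect on the treated."""
    gaps = []
    for score, outcome in treated:
        _, matched_outcome = min(control, key=lambda c: abs(c[0] - score))
        gaps.append(outcome - matched_outcome)
    return sum(gaps) / len(gaps)

effect = match_effect(treated, control)
```

Real applications add refinements this sketch omits (matching with replacement or calipers, checking covariate balance after matching), but the core logic — compare like with like on the estimated probability of treatment — is the same.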
2. It’s socially acceptable to experiment on children!
In the Education Endowment Foundation’s (EEF) early days, there were concerns that testing out teaching approaches on children would be viewed as unethical and would not be accepted by schools. Five years on, the EEF has conducted over 150 trials, in which over a third of English schools have volunteered to take part. As a result, the EEF is responsible for an estimated 10% of education trials worldwide and their evidence is trusted by thousands of headteachers across the country.
3. Finding out what doesn’t work is just as important as understanding what does.
It turns out that antibiotics are ineffective in treating most cases of sinusitis (despite being routinely prescribed). Transferring young people from the juvenile to the adult criminal justice system makes reoffending more likely. Repeating a school year, school uniforms, peer-to-peer teacher observation, and new school buildings do nothing for pupil attainment. Only by finding out what doesn’t work — and being transparent about it — can we identify where money can be saved and re-invested in effective interventions.
4. Mobilising evidence is harder than we first imagined.
Simply making evidence available isn’t enough to change practice on the ground. In one of their recent trials (dubbed the Literacy Octopus), the EEF tested various methods of engaging with teachers, including printed practice guides, conferences and webinars, and found that none increased the likelihood of teachers adopting recommended practices in the classroom.
Many of the What Works Centres are now trying to accelerate the adoption of new evidence through more sustained engagement with practitioners (for example, the Early Intervention Foundation’s Early Intervention Police Academy, the Centre for Ageing Better’s strategic partnerships with local authorities, and the EEF’s Research Schools Network).
5. Short-term effects don’t always mean long-term benefits.
The Early Intervention Foundation (EIF) has assessed over 100 programs and has warned that studies which do not assess long-term outcomes (at least one year post-intervention) — or do not assess them well — cannot tell us whether short-term effects persist. How do we know, for example, that a program to help people find employment delivers meaningful results if we only track participants for six months after they start a new job? To avoid such problems, the EIF has recently released new guidance for evaluators. — Edward Orlik
This article was originally published on the What Works blog and can be found here.
(Picture credit: Flickr/Howard County Library)