This piece is published on Apolitical by the Early Intervention Foundation (EIF)
This guide addresses six of the most common issues we see in our assessments of program evaluations. For each, we explain how the problem undermines confidence in a study’s findings and how it can be avoided or rectified, and we provide case studies and a list of useful resources. Whether you are involved in commissioning, planning or delivering evaluations of early intervention, these are the issues to understand and watch out for.
Why does avoiding these common pitfalls in evaluation matter?
High-quality evidence on ‘what works’ plays an essential part in improving the design and delivery of public services, and ultimately outcomes for the people who use those services. Early intervention is no different: early intervention programs should be commissioned, managed and delivered to produce the best possible results for children and young people at risk of developing long-term problems.
EIF has conducted over 100 in-depth assessments of the evidence for the effectiveness of programs designed to improve outcomes for children. These program assessments consider not only the findings of the evidence – whether the evidence suggests that a program is effective or not – but also the quality of that evidence.
Studies investigating the impact of programs vary in how robustly they have been planned and carried out. Less robust studies are prone to produce biased results, meaning they may overstate a program’s effectiveness. In the worst case, they may mislead us into concluding that a program is effective when it is not. To understand what the evidence tells us about a program’s effectiveness, therefore, it is essential to consider the quality of the process by which that evidence was generated.
How does this guide help?
In this guide, we identify a set of issues with evaluation design and execution that undermine our confidence in a study’s results, and which we have seen repeatedly across the program assessments we have conducted to date. To help address these six common pitfalls, we offer advice for those involved in planning and delivering evaluations – evaluators and program providers alike – which we hope will support improvements in the quality of evaluation in the UK, and in turn generate more high-quality evidence on the effectiveness of early intervention programs in this country.
You can download EIF’s full report here.
(Picture credit: Pexels)